Speed up by ignoring std::cout

Test Code

  #include <iostream>
   
  int main(int argc, char *argv[])
  {
      //std::cout.setstate(std::ios_base::badbit);
      for(int i = 0; i < 100; i ++) {
          for(int j = 0; j < 100; j ++) {
              ;//std::cout << "" << std::endl;
          }
      }
      return 0;
  }

Test it without std::cout

  root@imx6ul7d:~/tmp# time ./t1_1
   
  real    0m0.041s
  user    0m0.020s
  sys     0m0.000s

Test it with std::cout, but redirect to /dev/null

  root@imx6ul7d:~/tmp# time ./t1_0 > /dev/null
   
  real    0m0.096s
  user    0m0.030s
  sys     0m0.030s

Test it with std::cout, but set io state

  root@imx6ul7d:~/tmp# time ./t1_2
   
  real    0m0.061s
  user    0m0.040s
  sys     0m0.000s
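
For reference, a minimal standalone sketch of the same badbit trick (my illustration, not the benchmark binaries above): setstate(badbit) makes all later insertions fail quietly, and clear() restores the stream.

  #include <iostream>
   
  int main()
  {
      std::cout.setstate(std::ios_base::badbit);  // discard everything written from here on
      std::cout << "dropped" << std::endl;        // no output, the stream is in a failed state
      std::cout.clear();                          // reset the state flags, output works again
      std::cout << "printed" << std::endl;
      return 0;
  }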

Profile : Linux

Example 1

Add -pg

  arm-linux-gnueabihf-g++ -Wall -g -pg hello.cpp -o hello -std=c++17

Test

  root@imx6ul7d:~# gprof -b hello
  Flat profile:
   
  Each sample counts as 0.01 seconds.
    %   cumulative   self              self     total
   time   seconds   seconds    calls   s/call   s/call  name
  100.00      3.45     3.45        1     3.45     3.45  hehe()
    0.00      3.45     0.00        4     0.00     0.00  std::_Optional_base<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >::_M_is_engaged() const
  ......
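
The contents of hello.cpp are not shown here; a hypothetical version consistent with that flat profile (a CPU-bound hehe() dominating the run time) might look like the sketch below. Run the instrumented binary once to produce gmon.out, then run gprof on it.

  // Hypothetical hello.cpp: hehe() burns CPU so it dominates the gprof flat profile.
  #include <iostream>
   
  void hehe()
  {
      volatile double sum = 0.0;
      for (long i = 0; i < 400000000L; ++i)
          sum += i * 0.5;
  }
   
  int main()
  {
      hehe();
      std::cout << "hello" << std::endl;
      return 0;
  }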

3 Sequence models & Attention mechanism

Various sequence to sequence architectures

Sequence-to-sequence models are useful for everything from machine translation to speech recognition.

  • machine translation
  • image captioning

Picking the most likely sentence

 

Beam Search

You don’t want to output a random English translation, you want to output the best and the most likely English translation. Beam search is the most widely used algorithm to do this.

So, whereas greedy search picks only the single most likely word and moves on, beam search instead considers multiple alternatives. The beam search algorithm has a parameter called B, which is called the beam width.

Notice that what we ultimately care about in this second step is finding the pair of the first and second words that is most likely. So it's not just the second word that is most likely, but the pair of the first and second words that is most likely.

Evaluate all of these 30,000 options according to the probability of the first and second words, and then pick the top three. Because the beam width is equal to three, at every step you instantiate three copies of the network to evaluate these partial sentence fragments and their outputs.

And it's because the beam width is equal to three that you have three copies of the network with different choices for the first word.

Beam search will usually find a much better output sentence than greedy search.
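
A minimal beam-search sketch (my illustration, not course code). Here next_log_probs() is a toy stand-in for the decoder's softmax over the vocabulary; in a real system it would come from the RNN conditioned on the prefix and the input sentence x.

  #include <algorithm>
  #include <cmath>
  #include <vector>
   
  struct Hypothesis {
      std::vector<int> words;   // word indices chosen so far
      double log_prob = 0.0;    // sum of log P(word | previous words, x)
  };
   
  // Toy model: log-probabilities over a 3-word vocabulary, ignoring the prefix.
  std::vector<double> next_log_probs(const std::vector<int>& /*prefix*/)
  {
      return { std::log(0.5), std::log(0.3), std::log(0.2) };
  }
   
  std::vector<Hypothesis> beam_search_step(const std::vector<Hypothesis>& beams, int B)
  {
      std::vector<Hypothesis> candidates;
      for (const auto& h : beams) {
          std::vector<double> lp = next_log_probs(h.words);
          for (int w = 0; w < static_cast<int>(lp.size()); ++w) {
              Hypothesis c = h;
              c.words.push_back(w);
              c.log_prob += lp[w];          // keep sums of logs, not products
              candidates.push_back(c);
          }
      }
      // Keep only the B most likely partial sentences (the beam).
      std::sort(candidates.begin(), candidates.end(),
                [](const Hypothesis& a, const Hypothesis& b) {
                    return a.log_prob > b.log_prob;
                });
      if (static_cast<int>(candidates.size()) > B) candidates.resize(B);
      return candidates;
  }
   
  int main()
  {
      const int B = 3;
      std::vector<Hypothesis> beams = { Hypothesis{} };  // start from the empty sentence
      for (int t = 0; t < 4; ++t)
          beams = beam_search_step(beams, B);
      // beams[0] now holds the most likely length-4 sequence found by the beam.
      return 0;
  }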

Refinements to Beam Search

Length normalization is a small change to the beam search algorithm that can help you get much better results.

Numerical underflow means the product of probabilities is too small for the floating-point representation in your computer to store accurately.

So in most implementations, you keep track of the sum of the logs of the probabilities rather than the product of the probabilities.

Instead of using this as the objective you’re trying to maximize, one thing you could do is normalize this by the number of words in your translation. And so this takes the average of the log of the probability of each word.

And this significantly reduces the penalty for outputting longer translations.

And in practice, as a heuristic, instead of dividing by Ty, the number of words in the output sentence, sometimes you use a softer approach: divide by Ty to the power of alpha, where maybe alpha is equal to 0.7. If alpha were equal to 1, you would be completely normalizing by length. If alpha were equal to 0, then Ty to the 0 would be 1, and you would not be normalizing at all. This is somewhere in between full normalization and no normalization, and alpha is another hyperparameter of the algorithm that you can tune to try to get the best results.

Pick the one that achieves the highest value on this normalized log probability objective. Sometimes it’s called a normalized log likelihood objective.
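
Written out, the normalized log-likelihood objective described above is

\(\frac{1}{T_y^{\alpha}} \sum _{t=1}^{T_y} log P(y^{<t>} | x, y^{<1>}, \ldots , y^{<t-1>})\)

with \(\alpha\) (for example 0.7) the tunable hyperparameter and \(T_y\) the number of words in the output sentence.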

In production systems, it’s not uncommon to see a beam width maybe around 10.

Exact search algorithms : 

  • BFS, Breadth First Search
  • DFS, Depth First Search

Beam search runs much faster but does not guarantee to find the exact maximum for this arg max that you would like to find.

Error analysis in beam search

Beam search is an approximate search algorithm, also called a heuristic search algorithm.

How error analysis interacts with beam search and how you can figure out whether it is the beam search algorithm that’s causing problems and worth spending time on. Or whether it might be your RNN model that is causing problems and worth spending time on.

Model : 

  • RNN model (neural network model or sequence to sequence model)
    • It’s really an encoder and a decoder.
    • P(y|x)
  • Beam search algorithm
\(\left\{\begin{matrix}
P(y^{*}|x) > P(\hat y|x) & \Rightarrow \text{beam search is at fault} \\
P(y^{*}|x) \leq P(\hat y|x) & \Rightarrow \text{the RNN model is at fault}
\end{matrix}\right.\)

Here \(y^{*}\) is the human reference translation, \(\hat y\) is the output of beam search, and both probabilities are computed with the RNN model.

 

 

BLEU Score

How to evaluate a machine translation system

The way this is done conventionally is through something called the BLEU score.

What the BLEU score does is given a machine generated translation, it allows you to automatically compute a score that measures how good is that machine translation.

BLEU, by the way, stands for bilingual evaluation understudy.

The intuition behind the BLEU score is we're going to look at the machine generated output and see if the types of words it generates appear in at least one of the human generated references.
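
For reference (standard BLEU definitions, not spelled out in these notes): the modified n-gram precision clips each n-gram count by the maximum number of times that n-gram appears in any reference, and the final score combines the precisions with a brevity penalty BP:

\(p_n = \frac{\sum _{ngram \in \hat y} Count_{clip}(ngram)}{\sum _{ngram \in \hat y} Count(ngram)}\), \(BLEU = BP \cdot exp(\frac{1}{4} \sum _{n=1}^{4} log\ p_n)\)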

 
The reason the BLEU score was revolutionary for machine translation was because this gave a pretty good, by no means perfect, but pretty good single real number evaluation metric. And so that accelerated the progress of the entire field of machine translation.

Today, BLEU score is used to evaluate many systems that generate text, such as machine translation systems, as well as the example I showed briefly earlier of image captioning systems.

Attention Model Intuition

The Attention Model makes RNNs work much better.

  • It's just difficult to get a neural network to memorize a super long sentence.
  • But with an Attention Model, a machine translation system's performance holds up much better on long sentences, because it works on one part of the sentence at a time.
    • What the Attention Model computes is a set of attention weights.

Attention Model

This algorithm runs in quadratic cost, although in machine translation applications, where neither the input nor the output sentence is usually that long, the quadratic cost may actually be acceptable.
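
For reference, the attention weights mentioned above are computed with a softmax over scores \(e^{<t,t'>}\), so that for each output step t they sum to one, and the context fed to the decoder is the weighted sum of the encoder activations:

\(\alpha^{<t,t'>} = \frac{exp(e^{<t,t'>})}{\sum _{t'=1}^{T_x} exp(e^{<t,t'>})}\), \(c^{<t>} = \sum _{t'} \alpha^{<t,t'>} a^{<t'>}\)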

Speech recognition

One of the most exciting developments with sequence-to-sequence models has been the rise of very accurate speech recognition.

A common pre-processing step for audio data is to take your raw audio clip and generate a spectrogram. This is a plot where the horizontal axis is time, the vertical axis is frequency, and the intensity of different colors shows the amount of energy.

Once upon a time, speech recognition systems used to be built using phonemes, which were hand-engineered basic units of sound. But with end-to-end deep learning, we're finding that phoneme representations are no longer necessary.

 

Trigger Word Detection

With the rise of speech recognition, there have been more and more devices you can wake up with your voice; those are sometimes called trigger word detection systems.

 
The literature on trigger word detection algorithm is still evolving. So there isn’t wide consensus yet on what’s the best algorithm for trigger word detection.

One example of an algorithm: you can use an RNN like this. What you really do is take an audio clip, maybe compute spectrogram features, and that generates audio features \(x^{<1>},x^{<2>},x^{<3>}\). You then pass these to an RNN, so all that remains to be done is to define the target labels \(Y\). Suppose this point in the audio clip is when someone just finished saying the trigger word, such as Alexa or xiaodunihao or hey Siri or okay Google.

 
Then in the training set, you can set the target labels to be zero for everything before that point and set the target label to one right after it. And if a little bit later on the trigger word is said again at some point, you can again set the target label to be one right after that.

One slight disadvantage of this is that it creates a very imbalanced training set: a lot more zeros than ones.

Instead of setting only a single time step to output one, you can actually output a few ones for several consecutive time steps, or for a fixed period of time, before reverting back to zero. That slightly evens out the ratio of ones to zeros, but it is a little bit of a hack. A small sketch of this labeling scheme follows below.
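
A small sketch of that labeling scheme (a hypothetical helper, not course code; the number of 1s written per trigger is an assumption):

  #include <vector>
   
  // y is 0 everywhere, except for a fixed stretch of 1s right after each time
  // step where the trigger word ends.
  std::vector<int> make_labels(int Ty, const std::vector<int>& trigger_end_steps,
                               int ones_per_trigger = 50)
  {
      std::vector<int> y(Ty, 0);
      for (int end : trigger_end_steps) {
          for (int t = end + 1; t <= end + ones_per_trigger && t < Ty; ++t)
              y[t] = 1;   // several 1s instead of a single 1 to ease class imbalance
      }
      return y;
  }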

2 Natural Language Processing and Word Embeddings

Word Representation

NLP, Natural Language Processing

Word embeddings are a way of representing words that lets your algorithms automatically understand analogies like: man is to woman as king is to queen, and many other examples.

Representing words using a vocabulary of words.

One of the weaknesses of this representation is that it treats each word as a thing unto itself, and it doesn't allow an algorithm to easily generalize across words.


You see plots like these sometimes on the internet to visualize some of these 300 or higher dimensional embeddings.

To visualize it, algorithms like t-SNE, map this to a much lower dimensional space.

Using Word Embeddings

Transfer learning and word embeddings

  1. Learn word embeddings from large text corpus. (1-100B words or download pre-trained embedding online.)
  2. Transfer embedding to new task with smaller training set. (say, 100k words)
  3. Optional: Continue to finetune the word embeddings with new data.

Properties of Word Embeddings

One of the most fascinating properties of word embeddings is that they can also help with analogy reasoning.

The most commonly used similarity function is called cosine similarity : \(CosineSimilarity(u,v) = \frac{u \cdot v}{\left \| u \right \|_2\left \| v \right \|_2} = cos(\theta)\)
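
A minimal sketch of this similarity function over two embedding vectors of the same dimension. To solve an analogy such as man:woman :: king:?, you would look for the word w whose embedding maximizes the similarity with \(e_{king} - e_{man} + e_{woman}\).

  #include <cmath>
  #include <vector>
   
  double cosine_similarity(const std::vector<double>& u, const std::vector<double>& v)
  {
      double dot = 0.0, nu = 0.0, nv = 0.0;
      for (std::size_t k = 0; k < u.size(); ++k) {
          dot += u[k] * v[k];   // u . v
          nu  += u[k] * u[k];   // ||u||^2
          nv  += v[k] * v[k];   // ||v||^2
      }
      return dot / (std::sqrt(nu) * std::sqrt(nv));
  }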

Embedding Matrix

When you implement an algorithm to learn a word embedding, what you end up learning is an embedding matrix.

And the columns of this matrix would be the different embeddings for the 10,000 different words you have in your vocabulary.
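
In symbols, if \(o_j\) is the one-hot vector for word j and E is the embedding matrix (for example 300 x 10,000), then the embedding of word j is the column

\(e_j = E \cdot o_j\)

although in practice you use a specialized lookup rather than an actual matrix-vector multiplication, because multiplying by a one-hot vector is wasteful.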

 

Learning Word Embeddings

It turns out that building a neural language model is a reasonable way to learn a set of embeddings.

Well, what’s actually more commonly done is to have a fixed historical window.

And using a fixed history, just means that you can deal with even arbitrarily long sentences because the input sizes are always fixed.

If your goal is to learn an embedding, researchers have experimented with many different types of context.

  • If your goal is to build a language model then it is natural for the context to be a few words right before the target word.
  • But if your goal isn’t to learn the language model per se, then you can choose other contexts.

Word2Vec

The Word2Vec algorithm is a simpler and computationally more efficient way to learn these types of embeddings.

Skip-Gram model

\(\begin{matrix}
Softmax : & p(t|c) = \frac{e^{\theta ^T_t e_c}}{\sum _{j=1}^{10,000} e^{\theta ^T_j e_c}} \\
Loss Function : & L(\hat y, y) = - \sum _{i=1}^{10,000} y_i log \hat y _i
\end{matrix}\)

The primary problem is computational speed: the softmax step is very expensive to calculate because you need to sum over your entire vocabulary size in the denominator of the softmax.

A few solutions:

    • hierarchical softmax classifier
    • negative sampling

CBOW

The Continuous Bag-Of-Words Model takes the surrounding context of a middle word and uses the surrounding words to try to predict the middle word.

Negative Sampling

What you do in this algorithm is create a new supervised learning problem. The problem is: given a pair of words like orange and juice, predict whether this is a context-target pair. It's really trying to distinguish between the two types of distributions from which you might sample a pair of words.

How do you choose the negative examples?

  • One option is to sample the candidate target words according to their empirical frequency in your corpus, but that over-represents very frequent words.
  • Another is to use 1 over the vocab size and sample the negative examples uniformly at random, but that's also very non-representative of the distribution of English words.
  • The authors, Mikolov et al., reported that what works best empirically is something in between: \(P(w_i) = \frac{f(w_i)^{\frac{3}{4}}}{\sum _{j=1}^{10,000}f(w_j)^{\frac{3}{4}}}\), where \(f(w_i)\) is the observed frequency of word \(w_i\) (see the sketch below).
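
A small sketch of sampling from that distribution (my illustration; word_freq holds observed counts or frequencies for the vocabulary):

  #include <cmath>
  #include <random>
  #include <vector>
   
  int sample_negative(const std::vector<double>& word_freq, std::mt19937& rng)
  {
      std::vector<double> weights(word_freq.size());
      for (std::size_t i = 0; i < word_freq.size(); ++i)
          weights[i] = std::pow(word_freq[i], 0.75);        // f(w_i)^(3/4)
      std::discrete_distribution<int> dist(weights.begin(), weights.end());
      return dist(rng);                                     // index of the sampled negative word
  }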

GloVe Word Vectors

GloVe stands for global vectors for word representation.

We sample pairs of words, a context word and a target word, by picking two words that appear in close proximity to each other in our text corpus. What the GloVe algorithm does is start off by just making that relationship explicit.

 

Sentiment Classification

Sentiment classification is the task of looking at a piece of text and telling if someone likes or dislikes the thing they’re talking about.

One of the challenges of sentiment classification is that you might not have a huge labeled training set for it. But with word embeddings, you're able to build good sentiment classifiers even with only modest-sized labeled training sets.

One of the problems with this algorithm is it ignores word order.

More Sophisticated Model : 

 

Debiasing Word Embeddings

Machine learning and AI algorithms are increasingly trusted to help with, or to make, extremely important decisions. And so we like to make sure that as much as possible that they’re free of undesirable forms of bias, such as gender bias, ethnicity bias and so on.

  • So the first thing we’re going to do is identify the direction corresponding to a particular bias we want to reduce or eliminate.
  • the next step is a neutralization step. So for every word that’s not definitional, project it to get rid of bias.
  • And then the final step is called equalization in which you might have pairs of words such as grandmother and grandfather, or girl and boy, where you want the only difference in their embedding to be the gender.
  • And then, finally, the number of pairs you want to equalize is actually also relatively small, and, at least for the gender example, it is quite feasible to hand-pick them.

1 Recurrent Neural Networks

Why Sequence Models?

Models like recurrent neural networks or RNNs have transformed speech recognition, natural language processing and other areas.

Notation

Suppose the input is a sequence of nine words. Eventually we're going to have nine sets of features to represent these nine words, and to index into the positions in the sequence, I'm going to use \(x^{<1>}\), \(x^{<2>}\), \(x^{<3>}\) and so on up to \(x^{<9>}\).

Use \(x^{<t>}\) to index into positions in the middle of the sequence. The t implies that these are temporal sequences, although whether or not the sequences are temporal, I'm going to use the index t to index into the positions in the sequence.

Use \(T_{x}\) to denote the length of the input sequence.

\(x^{(i)<t>}\) refers to the t-th element in the sequence of training example i

\(T_{x}^{(i)}\) is the length of sequence i

NLP or Natural Language Processing

Use one-hot representations to represent each of these words.

What if you encounter a word that is not in your vocabulary? Well, the answer is you create a new token, a fake word called Unknown Word, denoted by angle brackets as <UNK>, to represent words not in your vocabulary.

Recurrent Neural Network Model

Why not a standard network?

Problems:

  • Inputs, outputs can be different lengths in different examples.
  • Doesn’t share features learned across different positions of text.

And what a recurrent neural network does is, when it goes on to read the second word in the sentence, say \(x^{<2>}\), instead of just predicting \(\hat y^{<2>}\) using only \(x^{<2>}\), it also gets to input some information from what it computed at time-step one. At each time-step, the recurrent neural network passes on its activation to the next time-step for it to use.

Now one limitation of this particular neural network structure is that the prediction at a certain time uses inputs or uses information from the inputs earlier in the sequence but not information later in the sequence. We will address this in a later video where we talk about a bidirectional recurrent neural networks or BRNNs.

The activation function used to compute the activations will often be a tanh, and sometimes ReLU is also used, although tanh is actually a pretty common choice.

Simplified RNN notation : \(\begin{matrix}
a^{<t>} = g_1(W_{aa}a^{<t-1>} + W_{ax}x^{<t>} + b_a)\\
\hat y ^{<t>} = g_2(W_{ya}a^{<t>} + b_y)
\end{matrix}\)
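
A plain C++ sketch of one forward time-step of this recurrence (dense matrices as nested vectors; dimensions are assumed to be consistent, and only the \(a^{<t>}\) update is shown):

  #include <cmath>
  #include <vector>
   
  using Vec = std::vector<double>;
  using Mat = std::vector<Vec>;
   
  Vec matvec(const Mat& W, const Vec& x)
  {
      Vec y(W.size(), 0.0);
      for (std::size_t i = 0; i < W.size(); ++i)
          for (std::size_t j = 0; j < x.size(); ++j)
              y[i] += W[i][j] * x[j];
      return y;
  }
   
  // a<t> = tanh(Waa * a<t-1> + Wax * x<t> + ba)
  Vec rnn_step(const Mat& Waa, const Mat& Wax, const Vec& ba,
               const Vec& a_prev, const Vec& x_t)
  {
      Vec h = matvec(Waa, a_prev), u = matvec(Wax, x_t), a(ba.size());
      for (std::size_t i = 0; i < a.size(); ++i)
          a[i] = std::tanh(h[i] + u[i] + ba[i]);
      return a;
  }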

Backpropagation through time

As usual, when you implement this in one of the programming frameworks, often, the programming framework will automatically take care of backpropagation.

Element-wise loss function : \(L^{<t>}(\hat y ^{<t>}, y ^{<t>}) = -y ^{<t>}log \hat y ^{<t>} - (1 - \hat y ^{<t>})log(1 - \hat y ^{<t>})\)
This is the standard logistic regression loss, also called the cross-entropy loss.

Overall loss of the entire sequence : \(L(\hat y, y) = \sum _{t=1}^{T_x} L ^{<t>}(\hat y ^{<t>}, y^{<t>})\)

Backpropagation through time: the motivation for this name is that for forward prop you are scanning from left to right, increasing the time index t, whereas for backpropagation you go from right to left, kind of going backwards in time.

Different types of RNNs

Language model and sequence generation

What a language model does is, given any sentence, tell you the probability of that particular sentence. This is a fundamental component both for speech recognition systems, as you've just seen, and for machine translation systems, which want to output only sentences that are likely.

How do you build a language model?

  • First you need a training set comprising a large corpus of English text, or text from whatever language you want to build a language model for. The word corpus is NLP terminology that just means a large body, a very large set, of English sentences.
    • The first thing you would do is tokenize the sentence. That means you form a vocabulary, as we saw in an earlier video, and then map each of these words to one-hot vectors, or to indices in your vocabulary.
    • One thing you might also want to do is model when sentences end. So another common thing to do is to add an extra token called EOS (end of sentence).
  • Then go on to build the RNN model.
    • The first step makes a softmax prediction to try to figure out the probability of the first word \(\hat y^{<1>}\). So what this step really does is use a softmax to predict the probability of every word in the dictionary being the first word.
    • Then the RNN steps forward to the next time step and passes its activation \(a^{<1>}\) on to that step. At this step, its job is to try to figure out what the second word is.
    • And at the end, given everything that comes before, hopefully it will predict a high chance of the EOS end-of-sentence token.

Sampling novel sequences

After you train a sequence model, one of the ways you can informally get a sense of what it has learned is to sample novel sequences.

  • what you want to do is first sample what is the first word you want your model to generate.

Then you will generate a randomly chosen sentence from your RNN language model.

  • word-level RNN
  • character-level RNN
    • advantage : you don't ever have to worry about unknown word tokens.
    • disadvantage : you end up with much longer sequences.
      • so character-level models are not in widespread use today, except for maybe specialized applications where you might need to deal with unknown or out-of-vocabulary words a lot.

Vanishing gradients with RNNs

It turns out the basic RNN we've seen so far is not very good at capturing very long-term dependencies.

  • It turns out that vanishing gradients tends to be the bigger problem with training RNNs
  • although when exploding gradients happens, it can be catastrophic because the exponentially large gradients can cause your parameters to become so large that your neural network parameters get really messed up. So it turns out that exploding gradients are easier to spot because the parameters just blow up and you might often see NaNs, or not a numbers, meaning results of a numerical overflow in your neural network computation. 
    • And if you do see exploding gradients, one solution is to apply gradient clipping. All that means is: look at your gradient vectors, and if one is bigger than some threshold, re-scale it so that it is not too big, clipping it according to some maximum value. So if your derivatives do explode or you see NaNs, just apply gradient clipping; that's a relatively robust solution that will take care of exploding gradients. A small sketch follows below.
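
A minimal sketch of that re-scaling (clipping a flattened gradient vector by its norm; the threshold is a hyperparameter):

  #include <cmath>
  #include <vector>
   
  void clip_gradient(std::vector<double>& grad, double max_norm)
  {
      double norm = 0.0;
      for (double g : grad) norm += g * g;
      norm = std::sqrt(norm);
      if (norm > max_norm) {
          double scale = max_norm / norm;
          for (double& g : grad) g *= scale;   // re-scale so the norm equals max_norm
      }
  }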

Gated Recurrent Unit (GRU)

The Gated Recurrent Unit is a modification to the RNN hidden layer that makes it much better at capturing long-range connections and helps a lot with the vanishing gradient problem.

The GRU unit is going to have a new variable called c, which stands for memory cell. What the memory cell does is provide a bit of memory to remember things. \(\tilde{c} ^{<t>} = tanh (W_c [c ^{<t-1>}, x ^{<t>}] + b_c)\)

the important idea of the GRU : \(\begin{matrix}
\Gamma _u = \sigma(W_u[c^{<t-1>}, x^{<t>}] + b_u) \\
c^{<t>} = \Gamma _u * \tilde{c} ^{<t>} + (1 - \Gamma _u) * c^{<t-1>}
\end{matrix}\)
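
For completeness, the full GRU from the course also has a relevance gate \(\Gamma _r\) that controls how much \(c^{<t-1>}\) is used when computing the candidate \(\tilde{c}^{<t>}\) (and \(a^{<t>} = c^{<t>}\)):

\(\begin{matrix}
\Gamma _r = \sigma(W_r[c^{<t-1>}, x^{<t>}] + b_r) \\
\tilde{c} ^{<t>} = tanh(W_c[\Gamma _r * c^{<t-1>}, x^{<t>}] + b_c) \\
\Gamma _u = \sigma(W_u[c^{<t-1>}, x^{<t>}] + b_u) \\
c^{<t>} = \Gamma _u * \tilde{c} ^{<t>} + (1 - \Gamma _u) * c^{<t-1>}
\end{matrix}\)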

LSTM (long short-term memory) unit

The long short-term memory unit is even more powerful than the GRU.
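
For reference, the LSTM equations from the course, with separate update, forget, and output gates (so the memory cell \(c^{<t>}\) is no longer equal to \(a^{<t>}\)):

\(\begin{matrix}
\tilde{c} ^{<t>} = tanh(W_c[a^{<t-1>}, x^{<t>}] + b_c) \\
\Gamma _u = \sigma(W_u[a^{<t-1>}, x^{<t>}] + b_u) \\
\Gamma _f = \sigma(W_f[a^{<t-1>}, x^{<t>}] + b_f) \\
\Gamma _o = \sigma(W_o[a^{<t-1>}, x^{<t>}] + b_o) \\
c^{<t>} = \Gamma _u * \tilde{c} ^{<t>} + \Gamma _f * c^{<t-1>} \\
a^{<t>} = \Gamma _o * tanh(c^{<t>})
\end{matrix}\)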

 

Perhaps the most common variation is that instead of having the gate values depend only on \(a^{<t-1>}, x^{<t>}\), sometimes people also sneak in the value \(c^{<t-1>}\) as well. This is called a peephole connection.

GRU

  • relatively recent invention
  • a simpler model, so it is actually easier to build a much bigger network; it only has two gates, so computationally it runs a bit faster and scales to building somewhat bigger models

LSTM

  • actually came much earlier
  • more powerful and more flexible since it has three gates instead of two.

LSTM has been the historically more proven choice.

Bidirectional RNN

Bidirectional RNNs let you, at a given point in time, take information from both earlier and later in the sequence.

In fact, for a lot of natural language processing problems, a bidirectional RNN with LSTM units is commonly used.

The disadvantage of the bidirectional RNN is that you do need the entire sequence of data before you can make predictions anywhere.

Deep RNNs

The different versions of RNNs you’ve seen so far will already work quite well by themselves. But for learning very complex functions sometimes it’s useful to stack multiple layers of RNNs together to build even deeper versions of these models.

For RNNs, having three layers is already quite a lot. Because of the temporal dimension, these networks can already get quite big even if you have just a small handful of layers, and you don't usually see them stacked up to be like 100 layers deep. One thing you do see sometimes is recurrent layers stacked on top of each other, with the output at each time step then feeding a bunch of deep layers that are not connected horizontally, which finally predict y<1> and so on.

4 Special applications : Face recognition & Neural style transfer

What is face recognition?

Liveness detection

Face Verification

  • Input image, name/ID
  • Output whether the input image is that of the claimed person

Face Recognition

  • Has a database of K persons
  • Get an input image
  • Output ID if the image is any of the K persons (or “not recognized”)

In fact, if you have a database of a hundred persons, you probably need the verification accuracy to be quite a bit higher than 99% for recognition to work well.

One-shot learning

One of the challenges of face recognition is that you need to solve the one-shot learning problem. What that means is that, for most face recognition applications, you need to recognize a person given just one single image, or given just one example of that person’s face. 

And historically, deep learning algorithms don't work well if you have only one training example. So for face recognition, you have to carry out one-shot learning. To make this work, what you're going to do instead is learn a similarity function.

\(d(img1, img2) = degree\ of\ difference\ between\ images.\)

 

\(\left.\begin{matrix}
If \ \ d(img1, img2) \leq \tau & , \ same\\
If \ \ d(img1, img2) > \tau & , \ different
\end{matrix}\right\}\)

 

Siamese network

Triplet loss

One way to learn the parameters of the neural network so that it gives you a good encoding for your pictures of faces is to define and apply gradient descent on the triplet loss function.

In the terminology of the triplet loss, you always look at one anchor image, and then you want the distance between the anchor and a positive image (a positive example, meaning the same person) to be small, whereas you want the distance between the anchor and a negative example to be much larger. This is what gives rise to the term triplet loss: you are always looking at three images at a time, an anchor image, a positive image, and a negative image.

\(\left \| f(A) - f(P) \right \| ^2 - \left \| f(A) - f(N) \right \| ^2 + a \leq 0\) \(L(A,P,N) = max (\left \| f(A) - f(P) \right \| ^2 - \left \| f(A) - f(N) \right \| ^2 + a, 0)\)
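
A small sketch of this loss for one triplet (f(A), f(P), f(N) are embeddings produced by the same network; a is the margin):

  #include <algorithm>
  #include <vector>
   
  double squared_distance(const std::vector<double>& u, const std::vector<double>& v)
  {
      double d = 0.0;
      for (std::size_t k = 0; k < u.size(); ++k)
          d += (u[k] - v[k]) * (u[k] - v[k]);
      return d;
  }
   
  // L(A,P,N) = max(||f(A)-f(P)||^2 - ||f(A)-f(N)||^2 + a, 0)
  double triplet_loss(const std::vector<double>& fA, const std::vector<double>& fP,
                      const std::vector<double>& fN, double a)
  {
      return std::max(squared_distance(fA, fP) - squared_distance(fA, fN) + a, 0.0);
  }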

For your face recognition system, maybe you have only a single picture of someone you might be trying to recognize, but for your training set you do need to make sure you have multiple images of the same person, at least for some people, so that you can have pairs of anchor and positive images.

Choosing the triplets A,P,N : 

During training, if A,P,N are chosen randomly,
\(d(A,P) + a \leq d(A,N)\) is easily satisfied.

So to construct a training set, what you want to do is choose triplets A, P, and N that are hard to train on. Also, because of the sheer data volume involved, this is one domain where it is often useful to download someone else's pretrained model rather than train everything from scratch yourself.

Face verification and binary classification

Take this pair of neural networks, this Siamese network, and have them both compute embeddings, maybe 128-dimensional or even higher-dimensional, and then feed these into a logistic regression unit to make a prediction, where the target output is 1 if both images are of the same person and 0 if they are of different persons. This is a way to treat face recognition as just a binary classification problem.

\(\hat y = \sigma (\sum _{k=1}^{128} w_k | f(x^{(i)})_k - f(x^{(j)})_k| + b)\)

 

Help your deployment significantly : 

What you can do is precompute the encodings for the images in your database. Then, when the new employee walks in, you use the upper ConvNet to compute the encoding of the new image, compare it to your precomputed encodings, and then use that comparison to make a prediction \(\hat y\).

What is neural style transfer?

In order to implement neural style transfer, you need to look at the features extracted by ConvNets, at various layers, the shallow and the deeper layers of a ConvNets.

What are deep ConvNets learning?

Cost function

Given a content image C and the style image S, then the goal is to generate a new image G.

\(J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)\)

Find the generated image G : 

  1. Initialize G randomly (G : 100 * 100 * 3)
  2. Use gradient descent to minimize J(G)

Content cost function

  • Say you use hidden layer l to compute content cost.
  • Use pre-trained ConvNet. (E.g., VGG network)
\(J_{content}(C,G) = \frac {1}{2} \left \| a^{[l](C)} - a^{[l](G)} \right \| ^ 2\)
  • Let \(a^{[l](C)}\) and \(a^{[l](G)}\) be the activation of layer l on the images
  • If \(a^{[l](C)}\) and \(a^{[l](G)}\) are similar, both images have similar content

Style cost function

Style matrix : 

Let \(a_{i,j,k}^{[l]}\) = activation at \((i,j,k)\). \(G^{[l](s)}\) is \(n_{c}^{[l]} \times n_{c}^{[l]}\)

And the degree of correlation gives you one way of measuring how often these different high-level features, such as a vertical texture or an orange tint, occur together and how often they occur apart in different parts of an image.

Define the style matrix: \(G_{kk'}^{[l](G)} = \sum _{i=1}^{n_H^{[l]}} \sum _{j=1}^{n_W^{[l]}} a_{i,j,k}^{[l](G)} a_{i,j,k'}^{[l](G)} \) So G, defined here using layer l on the generated image (the same definition on the style image gives \(G^{[l](S)}\)), is a matrix whose height and width are both the number of channels. The k, k' element of this matrix measures how correlated channels k and k' are.
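
A plain C++ sketch of that style (Gram) matrix for one layer's activations, indexed as a[i][j][k] over height, width, and channels:

  #include <vector>
   
  using Tensor3 = std::vector<std::vector<std::vector<double>>>;
   
  std::vector<std::vector<double>> gram_matrix(const Tensor3& a)
  {
      std::size_t nH = a.size(), nW = a[0].size(), nC = a[0][0].size();
      std::vector<std::vector<double>> G(nC, std::vector<double>(nC, 0.0));
      for (std::size_t k = 0; k < nC; ++k)
          for (std::size_t kp = 0; kp < nC; ++kp)
              for (std::size_t i = 0; i < nH; ++i)
                  for (std::size_t j = 0; j < nW; ++j)
                      G[k][kp] += a[i][j][k] * a[i][j][kp];  // correlation of channels k and k'
      return G;
  }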

Style cost function :

\(J_{style}^{[l]}(S,G) = \frac{1}{(2n_H^{[l]}n_W^{[l]}n_C^{[l]})^2} \sum _{k} \sum _{k'} (G_{kk'}^{[l](S)} - G_{kk'}^{[l](G)})^2\)

 

1D and 3D generalizations of models

For a lot of 1D data applications, you would actually use a recurrent neural network.

Three-dimensional data: one way to think of this data is that it now has some height, some width, and also some depth.