001.007.003 FashionMNIST Example CNN


In this particular video we are going to deploy convolutional layers and pooling layers. Normally the convolution and pooling layers combined are loosely called the convolutional part of the network, and pooling is not just max pooling: you can also have average pooling and other kinds of pooling. The kernel is the window that slides over the input and computes features for you.

So let's dive into Google Colab and first look at the code for the previous problem, which we solved with an ordinary neural network. We'll convert that code: we'll generate a new notebook, mostly copy-paste the code we already wrote, and change the cells that actually need the convolutional part.

That's our original neural network: we imported TensorFlow, imported the data, and, after checking the shape of the data in the last video, trained the model, which has a Flatten layer at the top, then some dense layers with dropouts, using the Adam optimizer, sparse categorical cross-entropy loss, and the accuracy metric. That's the model we built in the last video.

In this video, to deploy a convolutional neural network, we mostly have to change that model cell, adding some convolutional layers on top of it or changing its structure, and we also have to reshape the training data and the test data into a form that a convolutional neural network can accept. Those are the two cells that will change; everything else stays mostly the same. So let's convert this notebook into a convolutional neural network notebook and generate a new one.
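As a reminder, the dense-only model from the previous video looked roughly like this (a sketch: the 128-unit width and 0.2 dropout rate are assumptions filled in from common practice; your notebook may differ):

```python
import tensorflow as tf

# Dense-only baseline from the previous video (layer sizes are assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),  # 10 Fashion-MNIST classes
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```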
Okay, the new notebook is ready. Let's rename it, say, model_CNN, our convolutional neural network model. First of all, let's import tensorflow as tf. The very first time we execute code, Colab will allocate resources, initialize, and connect to a server; now everything we need is running.

Next we copy and paste some of the data-loading code from the previous notebook and download the data. If you're not in Google Colab but working on your own PC and have downloaded the data once, running this command again and again will not go to the internet; it will pick up the cached copy.

Going back to our previous notebook, the shape is as before: 60,000 training images, each 28 by 28. That's the important thing we need. Let's also copy over the class names and define them.

After that we would move to normalization, but before normalization we have to put the data into the structure that a convolutional neural network requires. When a convolutional network takes images, it expects an ndarray whose very first dimension is the number of images and whose remaining dimensions are the size of one image itself. So we convert the data with X_train.reshape: X_train.shape[0] gives us the total number of images, and we reshape everything to that number of images times the dimensions of one image, where one image is 28 by 28 by 1. That puts the data in a form the convolutional layers are happy with.
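The loading and reshaping steps above can be sketched like this (the class-name list follows the standard Fashion-MNIST label order):

```python
import tensorflow as tf

# After the first run, Keras picks up the cached copy instead of re-downloading
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Conv2D expects (num_images, height, width, channels): the first
# dimension counts images, the rest describe one 28x28x1 image
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
print(X_train.shape)  # (60000, 28, 28, 1)
```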
We do the same with the test data: X_test.reshape(X_test.shape[0], 28, 28, 1). Then come the scalings. It's always good to scale the features; feature scaling is important in most cases. Dividing by 255 is not the only way to scale the features, but it is an easy one and it works most of the time.

Next we move to our model. Everything is set, so let's copy and paste the previous model and see what we can do with it. Let's call this model cnn_model; it is a Sequential model with these layers. Before applying the dense network, we have to apply convolutional layers to actually extract the features; the dense network at the end then takes care of the classification. So let's extract features by deploying some convolutional layers.

What I do is tf.keras.layers.Conv2D. In this Conv2D layer I first have to define the total number of channels, or filters; let's say we have 32 filters. Then the size of each filter, which is a hyperparameter again: 3 by 3. And, for the very first layer only, the input shape it is expecting: input_shape=(28, 28, 1), since one input image is 28 by 28 by 1.
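A single Conv2D layer as just described can be sketched on its own to see what it produces (the output-shape print is the point of the sketch):

```python
import tensorflow as tf

# One Conv2D layer: 32 filters of size 3x3, expecting 28x28x1 inputs
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),
])

# Valid (no-padding) 3x3 convolution shrinks 28 -> 26 in each spatial
# dimension, and the 32 filters become 32 output channels
print(model.output_shape)  # (None, 26, 26, 32)
```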
So that's the convolutional layer added. Below the convolutional layer we may have a max pooling layer, or some other pooling layer: tf.keras.layers.MaxPooling2D, two-dimensional pooling, where we have to define the pooling size, say 2 by 2. We could also have a Dropout below it, or multiple convolutional layers, for example another convolutional layer here, or maybe a Dropout layer here; either way, the architecture really is a hyperparameter. So: layers.Conv2D again, and again we give the number of filters, say 32 (you may have more or fewer), and again a 3 by 3 filter, though you could choose otherwise. Here you need not define the input shape, because that is required only for the very first layer. Then tf.keras.layers.MaxPooling2D again, where you give the pooling size.

After defining this convolutional structure (here I have added just two convolutional layers, each followed by a max pooling layer), you have to flatten your output, just like previously, except that you need not give the input dimensions, because they are inferred automatically. Then you define the structure you really want for the rest of the network. One more thing: you can have a dense layer here and then convert the model to a probability model afterwards, or you can directly have a dense layer with the activation set to softmax, and then your model is directly ready for predictions. So: activation='softmax'. That's your model; you can change it, but that's how you can define a model for a convolutional neural network.
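Putting the pieces above together, the model sketched in the video looks roughly like this (the ReLU activations and the 128-unit dense layer are assumptions filled in from common practice; the video leaves the classification head open beyond the final softmax):

```python
import tensorflow as tf

cnn_model = tf.keras.Sequential([
    # Feature extraction: two Conv2D layers, each followed by max pooling
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    # Classification head: flatten, then dense layers (width is assumed)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    # Softmax in the last layer, so predict() gives probabilities directly
    tf.keras.layers.Dense(10, activation='softmax'),
])

cnn_model.summary()
```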
Let's run it and hope that it works fine. Yes, it's perfectly okay. Next we actually compile our model: cnn_model.compile, where we have to define our optimizer, our loss, and our metrics; we have a lot of choices here. Let's say the optimizer is Adam again, the loss is categorical cross-entropy, and the metric we are going to define is accuracy. The metrics argument receives a list because we can define multiple metrics. Let's compile our model; it works fine.

Then we fit our model: cnn_model.fit, which receives X_train and y_train. For this particular case, y_train should be given after converting it to categorical form, the one-hot encoding that categorical cross-entropy requires: tf.keras.utils.to_categorical(y_train). Here we also define the number of epochs; let's say epochs is 10. If everything is fine, the model should run. Yes, the epochs are running.

It will take some time, and the reason is that the convolutional layers and the pooling layers are computationally expensive. Epoch 2 is running now, then the third epoch. One thing you might notice here: as the number of epochs grows, the loss should decrease and the accuracy should increase, or at least hold steady. It's the fourth epoch now; this may take a few minutes, so, to not increase the length of the video, let me pause it and come back once all 10 epochs are done.
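The compile-and-fit steps above can be sketched like this (tiny random stand-in data and a single epoch so the sketch runs quickly; the real notebook trains on the full Fashion-MNIST arrays for 10 epochs):

```python
import numpy as np
import tensorflow as tf

# Small stand-in for the reshaped, scaled training data
X_tr = np.random.rand(64, 28, 28, 1).astype('float32')
y_tr = np.random.randint(0, 10, size=(64,))

cnn_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

cnn_model.compile(optimizer='adam',
                  loss='categorical_crossentropy',  # expects one-hot labels
                  metrics=['accuracy'])             # a list: several metrics allowed

# categorical_crossentropy needs one-hot targets, hence to_categorical
history = cnn_model.fit(X_tr, tf.keras.utils.to_categorical(y_tr, 10),
                        epochs=1, verbose=0)
```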
Okay, we are done here with the 10 epochs, and you can notice the loss gradually goes down and the accuracy gradually goes up. Care must be taken here: you may be tempted to increase the number of epochs just to increase the accuracy and decrease the loss, but then there might be a problem of overfitting. We have guarded against overfitting here with Dropout, and there are other workarounds for overfitting, like early stopping and so on; these are techniques that must be taken care of.

After this, the rest of the story is exactly the same. Our model is trained, so let's predict: y_predict is cnn_model.predict. Remember, here we deployed the softmax layer directly inside the model, so we need not convert it to a probability model separately. In the previous video we converted the model to a probability model explicitly, applying a softmax layer afterwards, because we did not deploy softmax in the last layer; here the softmax is deployed inside, so the probability model is already there. So we directly call the predict function on X_test. It is making predictions; yes, the predictions are ready.

Next we test the performance of our model on some of the testing examples. Let's say i is 100, and the class label is np.argmax of, oh, we haven't loaded NumPy.

So let's load NumPy here: import numpy as np. Now np.argmax(y_predict[i]) gives us our class label; we print the class name for this class label, and then we print the actual label: print('Actual label:', class_names[y_test[i]]). The actual label will be printed below and the predicted label on top, for the i-th example.

Let's see: we predicted Dress, and the actual label was Dress. Let's play with that: for i = 20, we predicted Pullover, and the actual label is also Pullover. We can go for some other value, maybe 50, or we can generate the index randomly: np.random.randint over the length of y_test gives us a random integer index. For that we get Ankle boot, and the actual label is Ankle boot. Generating again: Pullover versus Coat. That's the first mistake we have found; the actual label is Coat, and our model classified it as Pullover. Run it again: Dress, Dress; it's working almost fine. Of course, this is not a good way to test the accuracy, this is just eyeballing how the model performs, but that's what the overall structure is.

So what we have learned so far is that a convolutional neural network is really simple to implement in TensorFlow: just add some convolutional layers and pooling layers, maybe some dropouts inside, and the rest of the story is exactly the same as you have seen earlier. One difference is that you have to set the dimensions of your arrays consistent with the spatial structure that the convolutional neural network requires. So yeah, that's about it.
In this particular video series we introduced the beginner to TensorFlow and gave a couple of examples for quickly implementing neural networks and convolutional neural networks. But TensorFlow is much more than that: you can build recurrent neural networks, encoder-decoder networks, dimensionality reduction, generative adversarial networks, and much more, and there are visualizations using TensorBoard. TensorFlow is, I would say, a whole universe for deep learning; the more you explore it, the more fruitful features you will find. This particular video was intended for the very beginner and was meant to give a quick start with TensorFlow: even if you don't know how to install TensorFlow, just go to Google Colab, write your network, and you're ready to go.
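To recap, the prediction and eyeball check walked through above can be sketched end to end like this (an untrained stand-in model and random stand-in data, so the predicted labels are arbitrary; the point is the shape of the workflow):

```python
import numpy as np
import tensorflow as tf

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# Untrained stand-in for the trained cnn_model; softmax is the last
# layer, so predict() returns class probabilities directly
cnn_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Stand-in test images and labels
X_test = np.random.rand(200, 28, 28, 1).astype('float32')
y_test = np.random.randint(0, 10, size=(200,))

y_predict = cnn_model.predict(X_test, verbose=0)  # shape (200, 10)

# Eyeball check on a random test example
i = np.random.randint(len(y_test))
class_label = np.argmax(y_predict[i])   # index of the most probable class
print('Predicted:', class_names[class_label])
print('Actual:   ', class_names[y_test[i]])
```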

2023-05-17 04:48
