Quick guide to implement a Machine Learning model

Soheyb Merah
3 min read · Oct 15, 2020


You have studied the theory, you understand how it works, and you know what you want to do; the only thing that remains is the coding part, the implementation of your idea.

If you already know Python, amazing! If not, you should learn it to follow along with this article.

We will build an image classifier using the Fashion MNIST dataset.

To focus more on the coding, we will use Google Colab.

The first thing to do is to download the dataset, then import our libraries like in any other Python project. The new thing here is the enable_eager_execution function.
Eager execution is a handy mode in TensorFlow (TF) for building deep learning models from scratch. It lets you prototype models without the graph-based approach, meaning there is no need to start a graph session to perform tensor computations. This allows faster debugging: you can check each line of computation on the fly without wrapping it in a graph session.
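Below is a minimal sketch of the imports and the eager-execution switch, assuming TensorFlow together with the tensorflow_datasets package (in TF 2.x eager execution is already on by default, so the call only matters on TF 1.x):

import tensorflow as tf
import tensorflow_datasets as tfds
import math
import numpy as np
import matplotlib.pyplot as plt

# Only needed on TF 1.x; TF 2.x runs eagerly by default.
tf.compat.v1.enable_eager_execution()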

Then we load our dataset into memory. Here we can choose whether or not to add a validation split, since the original dataset only provides train and test splits. If you wish to skip it, un-comment the 5th line and comment out the 7th and 8th lines; otherwise keep it as it is. Since we don’t have a dedicated validation part, we make one by splitting the training set into 55,000–5,000 (you may choose other ranges). Finally, we set our labels; keep in mind that their order is very important!
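A sketch of this step could look like the following; the split sizes mirror the 55,000/5,000 split described above, and the class names follow the standard Fashion MNIST label order:

dataset, metadata = tfds.load('fashion_mnist', as_supervised=True, with_info=True)

# Without a validation split you could simply use the full training set:
# train_dataset = dataset['train']

train_dataset = dataset['train'].take(55000)
validation_dataset = dataset['train'].skip(55000)
test_dataset = dataset['test']

# The order must match the integer labels of the dataset.
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']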

To avoid getting confused with the numbers, we assign them to variables. If you skipped the validation part, you may delete the 3rd line and do:

num_train_examples = metadata.splits['train'].num_examples
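For example, keeping the validation split, the counts could be assigned like this (the 55,000/5,000 values follow the split made earlier):

num_train_examples = 55000
num_validation_examples = 5000
num_test_examples = metadata.splits['test'].num_examples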

Now we normalize our data by rescaling the pixel values from the 0–255 range to 0–1.
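A minimal sketch of this step maps a rescaling function over each dataset:

def normalize(images, labels):
    # Cast to float and rescale pixel values from 0-255 to 0-1.
    images = tf.cast(images, tf.float32)
    images /= 255
    return images, labels

train_dataset = train_dataset.map(normalize)
validation_dataset = validation_dataset.map(normalize)
test_dataset = test_dataset.map(normalize)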

Then we build the schema of our neural network using convolutional layers, batch normalization, and max pooling. As activation functions we have ReLU and softmax. The reason the last layer has 10 units is that we have 10 categories. The summary function displays an overview of the schema along with the number of trainable and non-trainable params.
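The sketch below illustrates such an architecture; the filter counts and dense-layer size are assumptions rather than the exact original values, but the 10-unit softmax output matches the 10 categories:

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')  # one unit per category
])

# Prints the layer schema with trainable and non-trainable parameter counts.
model.summary()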

Compiling configures the model for training, setting the optimizer and loss algorithms. The log directory will store the training results gathered by the TensorBoard callback, to be used later in the statistics-checking section.
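A sketch of this configuration; the Adam optimizer, sparse categorical cross-entropy loss, and log directory name are assumptions:

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The TensorBoard callback writes training statistics to log_dir.
log_dir = 'logs/fit'
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)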

Now we train the model. The fit function trains the model for a fixed number of epochs (iterations over the dataset) and returns a History object we can use for plotting the results.

We train our model by passing all the variables defined above.
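A sketch of the training call; the batch size and epoch count are placeholder values:

BATCH_SIZE = 32
EPOCHS = 10

train_batches = train_dataset.cache().shuffle(num_train_examples).batch(BATCH_SIZE)
validation_batches = validation_dataset.cache().batch(BATCH_SIZE)

# fit() returns a History object used for plotting below.
history = model.fit(train_batches,
                    epochs=EPOCHS,
                    validation_data=validation_batches,
                    callbacks=[tensorboard_callback])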

We plot our final results
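For example, a simple accuracy/loss plot from the History object could look like this (the metric keys may be 'acc'/'val_acc' on older TF versions):

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(len(acc))

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training accuracy')
plt.plot(epochs_range, val_acc, label='Validation accuracy')
plt.legend()
plt.title('Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training loss')
plt.plot(epochs_range, val_loss, label='Validation loss')
plt.legend()
plt.title('Loss')
plt.show()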

Finally, we save the model, convert it to the TFLite format, and check its metadata.
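A sketch of the save-and-convert step; the file names are placeholders, and from_keras_model is the TF 2.x converter API (TF 1.x uses from_keras_model_file on a saved .h5 file instead):

model.save('fashion_mnist.h5')

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('fashion_mnist.tflite', 'wb') as f:
    f.write(tflite_model)

# Inspect the converted model's input and output tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details())
print(interpreter.get_output_details())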

That’s it! Now you can use the TFLite file in your Android application, for example, or export your model for further improvements.


Written by Soheyb Merah

Computer Science student and enthusiast who loves trying out new stuff and contributing to Open Source projects, as I believe that teamwork is the key to success.
