SGD: MNIST — Putting it all together


There are seven steps to training a model in deep learning: initialize the parameters, predict, calculate the loss, calculate the gradients, step the weights, repeat the process, and stop.

Now let's put it all together with SGD and walk through the steps one by one:

Step 1: Initialize

In this step, we initialize our parameters with random values and tell PyTorch that we want to track their gradients.

We will do the following things in this step:

  • Set up the environment and libraries
  • Download the images and set up the path
  • Split the images into training and validation datasets
  • Create the DataLoaders from the datasets
  • Convert the data to NumPy arrays or PyTorch tensors
  • Initialize the parameters with random values (a sketch of this last step follows the list)
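
As a minimal sketch of the initialization itself, assuming the simple quadratic example that the later code snippets (f, time, speed) refer to, rather than the full MNIST pipeline:

import torch

# Hypothetical data: 20 time steps and noisy quadratic "speed" targets
time = torch.arange(0, 20).float()
speed = torch.randn(20) * 3 + 0.75 * (time - 9.5)**2 + 1

# The model: a quadratic function of time with three parameters
def f(t, params):
    a, b, c = params
    return a * (t**2) + b * t + c

# Random initial values, with gradient tracking turned on
params = torch.randn(3).requires_grad_()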

Step 2: Predict

In this step, we calculate the predictions to see how close they are to our targets. The code looks like this:

preds = f(time, params)

Step 3: Calculate the Loss

In this step, we calculate the loss. If we then change each weight a little in the direction of its slope, recalculate the loss, and repeat this a few times, we will get to the lowest point on the curve. We can use “mse” or “l1” to calculate the loss.

“mse” stands for *mean squared error*, and “l1” refers to the standard mathematical jargon for *mean absolute value* (in math it’s called the *L1 norm*).
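
Both can be written directly in PyTorch. A minimal sketch, using the names “mse” and “l1” from above (the bodies are the standard definitions):

def mse(preds, targets):
    # mean squared error: average of the squared differences
    return ((preds - targets)**2).mean()

def l1(preds, targets):
    # mean absolute value of the differences (the L1 norm)
    return (preds - targets).abs().mean()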

The code looks like this:

loss = mse(preds, speed)

Step 4: Calculate the gradients

In this step, we calculate the gradients, which measure how the loss changes as we change each weight. We can do this with the loss.backward() method. The code looks like this:

loss.backward()
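
After this call, PyTorch stores the gradient of the loss with respect to each tracked parameter in params.grad. Continuing the sketch from Step 1:

preds = f(time, params)
loss = mse(preds, speed)
loss.backward()
print(params.grad)  # a tensor of three gradients, one per parameter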

Step 5: Step the weights

In this step, we update the weights and biases based on the gradients and the learning rate. We put it all in a function and test it. The code looks like this:

def apply_step(params, prn=True):
    preds = f(time, params)
    loss = mse(preds, speed)
    loss.backward()
    params.data -= lr * params.grad.data
    params.grad = None
    if prn: print(loss.item())
    return preds
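
Here lr is the learning rate, a hyperparameter we choose. A usage sketch (the value 1e-5 is an assumption for illustration, not from this article):

lr = 1e-5  # assumed learning rate for this sketch
for i in range(10):
    preds = apply_step(params)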

We look at the printed loss to see whether it is improving.

Step 6: Repeat the process (Optimizer)

In this step, we loop and perform many improvement steps to reach good accuracy. We have created a general-purpose foundation we can build on. The next refinement is to create an object that handles the SGD step for us, so we can repeat the process, and to replace our linear model with a neural network. In PyTorch, nn.Linear gives us the linear model, and fastai's Learner class drives the training loop.
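
As a sketch of such an object, here is a minimal optimizer along the lines of the BasicOptim class from the fastai book (the class name is borrowed from there, not from this article):

class BasicOptim:
    def __init__(self, params, lr):
        self.params, self.lr = list(params), lr

    def step(self):
        # update each parameter in the direction of its negative gradient
        for p in self.params:
            p.data -= p.grad.data * self.lr

    def zero_grad(self):
        # reset the gradients so they don't accumulate across steps
        for p in self.params:
            p.grad = None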

To create a `Learner`, we pass in all the elements: the `DataLoaders`, the model, the optimization function (which will be passed the parameters), the loss function, and optionally any metrics to print. We then call the fit() function to view the results.
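
A hedged sketch of that wiring (dls stands for the DataLoaders built in Step 1; the loss and metric functions are illustrative stand-ins modeled on the fastai book, not this article's exact code):

from fastai.vision.all import *

def mnist_loss(preds, targets):
    # squash predictions into (0, 1), then penalize distance from the target label
    preds = preds.sigmoid()
    return torch.where(targets == 1, 1 - preds, preds).mean()

def batch_accuracy(preds, targets):
    # fraction of predictions on the correct side of 0.5
    preds = preds.sigmoid()
    return ((preds > 0.5) == targets).float().mean()

learn = Learner(dls, nn.Linear(28*28, 1), opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy)
learn.fit(10, lr=1e-3)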

Step 7: STOP

In this step, we decide where to stop and keep our “best” model. We keep training until the accuracy of the model starts getting worse, or we run out of time. We watch the training and validation losses and our metrics to decide when to stop.
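
fastai can automate this watch for us with its EarlyStoppingCallback. A sketch, assuming the learn setup from Step 6:

# Stop training if the validation loss fails to improve for two epochs
learn = Learner(dls, nn.Linear(28*28, 1), opt_func=SGD,
                loss_func=mnist_loss, metrics=batch_accuracy,
                cbs=EarlyStoppingCallback(monitor='valid_loss', patience=2))
learn.fit(40, lr=1e-3)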

Conclusion

By following these steps, we can train our deep learning model with SGD more easily, since fastai provides convenient pre-packaged pieces for each of them.
