Alex Egg,

These are my notes, taken from http://cs231n.github.io/optimization-1/#gradcompute. I copied the page here and inserted my own notes and commentary.

Given a loss/cost function that we want to minimize, there are two ways to compute its gradient (the vector of partial derivatives): numerically or analytically.

The numerical method employs basic calculus (the limit definition of the derivative) to approximate the derivative of an arbitrary function. The analytical method, by contrast, uses an expression for the gradient derived by hand from the given cost function, which is evaluated beforehand and hardcoded into the routine. The numerical method is an approximation and is computationally expensive, but easy to implement; the analytical method is exact and fast, but more error prone to derive and implement.
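To make the distinction concrete, here is a small sketch (my own toy example, not from the original notes): for f(x) = x², the analytic derivative is the hand-derived expression 2x, while the numerical derivative approximates it with a finite difference.

```python
def f(x):
    return x ** 2

def numerical_derivative(f, x, h=1e-5):
    # forward-difference approximation: (f(x+h) - f(x)) / h
    return (f(x + h) - f(x)) / h

def analytic_derivative(x):
    # derived by hand once from f, then hardcoded
    return 2 * x

x = 3.0
print(numerical_derivative(f, x))  # ~6.00001, an approximation
print(analytic_derivative(x))      # exactly 6.0
```

The forward difference is cheap to write but only approximate; in practice a centered difference, (f(x+h) − f(x−h)) / 2h, gives a more accurate estimate.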

This is a general-purpose routine that employs basic calculus to take the derivative of an arbitrary function.

The finite-difference formula, df(x)/dx = lim_{h→0} [f(x+h) − f(x)] / h, allows us to compute the gradient numerically. Here is a generic function that takes a function f and a vector x to evaluate the gradient on, and returns the gradient of f at x:

import numpy as np

def eval_numerical_gradient(f, x):
  """
  a naive implementation of numerical gradient of f at x
  - f should be a function that takes a single argument
  - x is the point (numpy array) to evaluate the gradient at
  """

  fx = f(x) # evaluate function value at original point
  grad = np.zeros(x.shape)
  h = 0.00001

  # iterate over all indexes in x
  it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
  while not it.finished:

    # evaluate function at x+h
    ix = it.multi_index
    old_value = x[ix]
    x[ix] = old_value + h # increment by h
    fxh = f(x) # evaluate f(x + h)
    x[ix] = old_value # restore to previous value (very important!)

    # compute the partial derivative
    grad[ix] = (fxh - fx) / h # the slope
    it.iternext() # step to next dimension

  return grad



Following the gradient formula we gave above, the code above iterates over all dimensions one by one, makes a small change h along that dimension and calculates the partial derivative of the loss function along that dimension by seeing how much the function changed. The variable grad holds the full gradient in the end.
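As a sanity check (my own example), we can run the routine on a function whose gradient we know in closed form: f(x) = Σ xᵢ², whose gradient is 2x. The function is repeated here so the snippet runs standalone:

```python
import numpy as np

def eval_numerical_gradient(f, x):
    # same naive routine as above, repeated so this snippet is self-contained
    fx = f(x)
    grad = np.zeros(x.shape)
    h = 0.00001
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old_value = x[ix]
        x[ix] = old_value + h
        fxh = f(x)
        x[ix] = old_value
        grad[ix] = (fxh - fx) / h
        it.iternext()
    return grad

x = np.array([1.0, -2.0, 3.0])
grad = eval_numerical_gradient(lambda v: np.sum(v ** 2), x)
print(grad)  # close to the true gradient [2, -4, 6]
```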

We can use the function given above to compute the gradient at any point and for any function. Let's compute the gradient for the CIFAR-10 loss function at some random point in the weight space:

# to use the generic code above we want a function that takes a single argument
# (the weights in our case) so we close over X_train and Y_train
def CIFAR10_loss_fun(W):
  return L(X_train, Y_train, W)

W = np.random.rand(10, 3073) * 0.001 # random weight vector
df = eval_numerical_gradient(CIFAR10_loss_fun, W) # get the gradient


The gradient tells us the slope of the loss function along every dimension, which we can use to make an update:

loss_original = CIFAR10_loss_fun(W) # the original loss
print 'original loss: %f' % (loss_original, )

# lets see the effect of multiple step sizes
for step_size_log in [-10, -9, -8, -7, -6, -5, -4, -3, -2, -1]:
  step_size = 10 ** step_size_log
  W_new = W - step_size * df # new position in the weight space
  loss_new = CIFAR10_loss_fun(W_new)
  print 'for step size %f new loss: %f' % (step_size, loss_new)

# prints:
# original loss: 2.200718
# for step size 1.000000e-10 new loss: 2.200652
# for step size 1.000000e-09 new loss: 2.200057
# for step size 1.000000e-08 new loss: 2.194116
# for step size 1.000000e-07 new loss: 2.135493
# for step size 1.000000e-06 new loss: 1.647802
# for step size 1.000000e-05 new loss: 2.844355
# for step size 1.000000e-04 new loss: 25.558142
# for step size 1.000000e-03 new loss: 254.086573
# for step size 1.000000e-02 new loss: 2539.370888
# for step size 1.000000e-01 new loss: 25392.214036


Update in negative gradient direction. In the code above, notice that to compute W_new we are making an update in the negative direction of the gradient df since we wish our loss function to decrease, not increase.

Effect of step size. The gradient tells us the direction in which the function has the steepest rate of increase, but it does not tell us how far along this direction we should step. As we will see later in the course, choosing the step size (also called the learning rate) will become one of the most important, and most headache-inducing, hyperparameter settings in training a neural network (see the case study below for an example). In our blindfolded hill-descent analogy, we feel the hill below our feet sloping in some direction, but the step length we should take is uncertain. If we shuffle our feet carefully we can expect to make consistent but very small progress (this corresponds to having a small step size). Conversely, we can choose to make a large, confident step in an attempt to descend faster, but this may not pay off. As you can see in the code example above, at some point taking a bigger step gives a higher loss as we “overstep”.
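The overstepping effect is easy to reproduce on a toy loss (my example, not the CIFAR-10 loss): on f(w) = w², the update w − lr·2w lands at w·(1 − 2·lr), so small steps shrink the loss while a step size above 1 overshoots the minimum and increases it.

```python
def loss(w):
    return w ** 2

w = 1.0
grad = 2 * w  # analytic gradient of w**2 at w = 1
losses = {}
for lr in [0.1, 0.5, 1.5]:
    w_new = w - lr * grad  # update in the negative gradient direction
    losses[lr] = loss(w_new)

print(losses)  # lr=0.1 shrinks the loss, lr=0.5 hits the minimum, lr=1.5 oversteps
```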

This is simply deriving the gradient by hand and hard-coding it into your application. For example, if this is a linear regression you take the gradient of the sum-of-squares cost function, or if this is a logistic regression you take the gradient of the log-loss cost function. Each gradient will be different depending on which cost function you are using. Compare this to the numerical method, which is general purpose and can handle any cost function. The analytical method is generally preferred in practice because it is exact and computationally cheaper. We can also use the two methods to validate each other: comparing the analytic gradient against the numerical one is known as a gradient check.
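Here is what a gradient check looks like for a concrete loss (my illustration): for least squares, L(w) = ||Xw − y||², the hand-derived gradient is 2Xᵀ(Xw − y), which we compare against a centered finite-difference estimate.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 4))   # toy data
y = rng.normal(size=20)
w = rng.normal(size=4)         # random point to check the gradient at

def loss(w):
    return np.sum((X @ w - y) ** 2)

# analytic gradient, derived by hand: 2 X^T (X w - y)
analytic = 2 * X.T @ (X @ w - y)

# numerical gradient via centered differences
h = 1e-6
numeric = np.zeros_like(w)
for i in range(len(w)):
    w_plus = w.copy();  w_plus[i] += h
    w_minus = w.copy(); w_minus[i] -= h
    numeric[i] = (loss(w_plus) - loss(w_minus)) / (2 * h)

print(np.max(np.abs(analytic - numeric)))  # tiny: the two gradients agree
```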

Now that we have a gradient function, we can use the gradient descent algorithm to find the W vector that minimizes the cost function.

Now that we can compute the gradient of the loss function, the procedure of repeatedly evaluating the gradient and then performing a parameter update is called Gradient Descent. Its vanilla version looks as follows:

# Vanilla Gradient Descent
while True:
  weights_grad = evaluate_gradient(loss_fun, data, weights)
  weights += - step_size * weights_grad # perform parameter update


This simple loop is at the core of all Neural Network libraries. There are other ways of performing the optimization (e.g. LBFGS), but Gradient Descent is currently by far the most common and established way of optimizing Neural Network loss functions.
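To see the loop actually run, here is a self-contained version on a toy convex loss (mine, not a neural network): f(w) = ||w||² with analytic gradient 2w, and a fixed iteration budget in place of `while True`.

```python
import numpy as np

def loss_fun(weights):
    return np.sum(weights ** 2)      # toy convex loss

def evaluate_gradient(weights):
    return 2 * weights               # its analytic gradient

step_size = 0.1
weights = np.array([1.0, -2.0, 3.0])
for _ in range(100):                 # fixed budget instead of `while True`
    weights_grad = evaluate_gradient(weights)
    weights += -step_size * weights_grad  # perform parameter update

print(loss_fun(weights))  # essentially 0: the minimum has been found
```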

Mini-batch gradient descent. In large-scale applications (such as the ILSVRC challenge), the training data can have on order of millions of examples. Hence, it seems wasteful to compute the full loss function over the entire training set in order to perform only a single parameter update. A very common approach to addressing this challenge is to compute the gradient over batches of the training data. For example, in current state of the art ConvNets, a typical batch contains 256 examples from the entire training set of 1.2 million. This batch is then used to perform a parameter update:

# Vanilla Minibatch Gradient Descent
while True:
  data_batch = sample_training_data(data, 256) # sample 256 examples
  weights_grad = evaluate_gradient(loss_fun, data_batch, weights)
  weights += - step_size * weights_grad # perform parameter update


The reason this works well is that the examples in the training data are correlated. To see this, consider the extreme case where all 1.2 million images in ILSVRC are in fact made up of exact duplicates of only 1000 unique images (one for each class, or in other words 1200 identical copies of each image). Then it is clear that the gradients we would compute for all 1200 identical copies would all be the same, and when we average the data loss over all 1.2 million images we would get the exact same loss as if we only evaluated on a small subset of 1000. In practice, of course, the dataset would not contain duplicate images, but the gradient from a mini-batch is still a good approximation of the gradient of the full objective. Therefore, much faster convergence can be achieved in practice by evaluating the mini-batch gradients to perform more frequent parameter updates.
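A quick numeric check of this argument (toy data and a toy squared loss of my own, not ILSVRC or a ConvNet): for a loss averaged over examples, the gradient over a random mini-batch is a good estimate of the gradient over the whole training set.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 5))           # 10k "training examples"
y = rng.normal(size=10000)
w = rng.normal(size=5)

def grad(Xb, yb, w):
    # gradient of the mean squared loss mean((X @ w - y)**2)
    return 2 * Xb.T @ (Xb @ w - yb) / len(yb)

g_full = grad(X, y, w)                    # full-batch gradient
idx = rng.choice(len(X), size=256, replace=False)
g_batch = grad(X[idx], y[idx], w)         # mini-batch of 256

rel_err = np.linalg.norm(g_batch - g_full) / np.linalg.norm(g_full)
print(rel_err)  # small relative error
```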

The extreme case of this is a setting where the mini-batch contains only a single example. This process is called Stochastic Gradient Descent (SGD) (or also sometimes on-line gradient descent). This is relatively less common to see because in practice due to vectorized code optimizations it can be computationally much more efficient to evaluate the gradient for 100 examples, than the gradient for one example 100 times. Even though SGD technically refers to using a single example at a time to evaluate the gradient, you will hear people use the term SGD even when referring to mini-batch gradient descent (i.e. mentions of MGD for “Minibatch Gradient Descent”, or BGD for “Batch gradient descent” are rare to see), where it is usually assumed that mini-batches are used. The size of the mini-batch is a hyperparameter but it is not very common to cross-validate it. It is usually based on memory constraints (if any), or set to some value, e.g. 32, 64 or 128. We use powers of 2 in practice because many vectorized operation implementations work faster when their inputs are sized in powers of 2.

### SGD Practical Discussion

Batch gradient descent computes the gradient using the whole dataset. This is great for convex, or relatively smooth, error manifolds. In this case, we move somewhat directly towards an optimum solution, either local or global. Additionally, batch gradient descent, given an annealed learning rate, will eventually find the minimum located in its basin of attraction.

Stochastic gradient descent (SGD) computes the gradient using a single sample; most applications of SGD actually use a minibatch of several samples. SGD works better than batch gradient descent for error manifolds that have lots of local maxima/minima. In this case, the somewhat noisier gradient calculated from the reduced number of samples tends to jerk the model out of local minima into a region that is hopefully more optimal. Single samples are really noisy, while minibatches tend to average a little of the noise out, so the amount of jerk is reduced when using minibatches. A good balance is struck when the minibatch size is small enough to avoid some of the poor local minima, but large enough that it doesn't avoid the global minimum or better-performing local minima. (Incidentally, this assumes that the best minima have a larger and deeper basin of attraction, and are therefore easier to fall into.)
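The noise-averaging claim is easy to measure (toy linear-regression setup of my own): the mini-batch gradient's deviation from the full-batch gradient shrinks as the batch grows, so larger batches "jerk" the model around less.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=5000)
w = np.zeros(3)

def batch_grad(idx):
    # squared-loss gradient over the examples indexed by idx
    Xb, yb = X[idx], y[idx]
    return 2 * Xb.T @ (Xb @ w - yb) / len(idx)

g_full = batch_grad(np.arange(len(X)))

def mean_deviation(batch_size, trials=200):
    # average distance between mini-batch and full-batch gradients
    errs = [np.linalg.norm(batch_grad(rng.choice(len(X), batch_size, replace=False)) - g_full)
            for _ in range(trials)]
    return float(np.mean(errs))

print(mean_deviation(1), mean_deviation(64))  # single samples are far noisier
```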

One benefit of SGD is that it’s computationally a whole lot faster. Large datasets often can’t be held in RAM, which makes vectorization much less efficient. Rather, each sample or batch of samples must be loaded, worked with, the results stored, and so on. Minibatch SGD, on the other hand, is usually intentionally made small enough to be computationally tractable.

Usually, this computational advantage is leveraged by performing many more iterations of SGD, making many more steps than conventional batch gradient descent. This usually results in a model that is very close to that which would be found via batch gradient descent, or better.

The way I like to think of how SGD works is to imagine that I have one point that represents my input distribution. My model is attempting to learn that input distribution. Surrounding the input distribution is a shaded area that represents the input distributions of all of the possible minibatches I could sample. It’s usually a fair assumption that the minibatch input distributions are close in proximity to the true input distribution. Batch gradient descent, at all steps, takes the steepest route to reach the true input distribution. SGD, on the other hand, chooses a random point within the shaded area, and takes the steepest route towards this point. At each iteration, though, it chooses a new point. The average of all of these steps will approximate the true input distribution, usually quite well.
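A small experiment in that spirit (my own toy linear-regression problem, not the model from these notes): mini-batch SGD takes noisy steps, but the average direction tracks the full-batch gradient, and the final weights land very close to the true solution.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w                        # noiseless targets for simplicity

w = np.zeros(3)
step_size = 0.05
for _ in range(2000):
    idx = rng.integers(0, len(X), size=32)            # sample a mini-batch
    grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / 32  # mini-batch gradient
    w -= step_size * grad

print(w)  # very close to [0.5, -1.0, 2.0]
```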

## Case Study: Practical Numeric vs. Analytic Gradient

There are many software packages that can evaluate your gradients for you: MATLAB, TensorFlow, Theano, etc. The benefit of these packages is that you have less boilerplate code in general and no complicated analytic derivatives/gradients to hand-code. Each of these packages implements gradient computation in its own way, and the details of those implementations should be known, as they can have important implications, as we found out in our project, described below:

While working on our grad project at UCSD with my colleagues, for project sharknado (http://sharknado.eggie5.com), we were dogged for a long while by the implications of using a numeric gradient. Our first goal was to match the baseline set in the VBPR paper (https://arxiv.org/abs/1510.01784). However, we were having trouble getting our model to learn anything at all, and spent weeks tuning Stochastic Gradient Descent learning rates and other hyperparameters of our model to no avail. We were perplexed, as the C++ reference implementation and our code were pretty much identical; the C++ code converged very easily but ours would not. We soon found the difference: the C++ code used an analytic gradient and our TensorFlow Python code used a numeric gradient.

The reason SGD with the numeric gradient didn't work in our case is that we are operating in a high-dimensional space: our loss function has a large number of parameters, and since the numeric solution is trying to optimize all the variables at once, it is easy to get stuck in a poor local minimum if your parameter initialization and learning rate aren't tuned perfectly.

The reason SGD with the analytic gradient works is that it is a closed-form expression for the gradient, whereas the numeric version is an approximation thereof. With the exact gradient, assuming the properties of convexity are upheld, we are guaranteed to find the global minimum and not get stuck in a local minimum.

This impasse led us to experiment with more sophisticated optimizers in the TensorFlow package, especially the adaptive optimizers. The first one we tried, Adam (https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer), was an instant success: its per-parameter adaptive learning rates made training far less sensitive to our initialization and global learning-rate choices, so it no longer got stuck in poor local minima.
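For reference, the core of the Adam update rule is small enough to sketch directly (this is the standard published formulation in plain NumPy, not our project code): it keeps running estimates of the gradient's first and second moments and scales each parameter's step by them.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)  # per-parameter adaptive step
    return w, m, v

# minimize the toy loss f(w) = ||w||**2, whose gradient is 2w
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, t)

print(w)  # close to the minimum at the origin
```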