My book “Deep Learning from first principles” now on Amazon

Note: The 2nd edition of this book is now available on Amazon

My 4th book (self-published), “Deep Learning from first principles – In vectorized Python, R and Octave” (557 pages), is now available on Amazon in both paperback ($18.99) and Kindle ($9.99/Rs 449) versions. The book starts with the most primitive 2-layer Neural Network and works its way to a generic L-layer Deep Learning Network, with all the bells and whistles. The book includes detailed derivations and vectorized implementations in Python, R and Octave. The code has been extensively commented and has been included in the Appendix section.

Pick up your copy today!!!

My other books
1. Practical Machine Learning with R and Python
2. Beaten by sheer pace – Cricket analytics with yorkr
3. Cricket analytics with cricketr

Deep Learning from first principles in Python, R and Octave – Part 8

1. Introduction

You don’t understand anything until you learn it more than one way. Marvin Minsky
No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either. Marvin Minsky
A wealth of information creates a poverty of attention. Herbert Simon

This post, Deep Learning from first Principles in Python, R and Octave – Part 8, is the final post in my Deep Learning from first principles series. In this post, I discuss and implement a key functionality needed while building Deep Learning networks viz. ‘Gradient Checking’. Gradient Checking is an important method to check the correctness of your implementation, specifically the forward propagation and backward propagation cycles. In addition, I discuss some tips for tuning the hyper-parameters of a Deep Learning network, based on my experience.

My posts in this ‘Deep Learning Series’ so far are
1. Deep Learning from first principles in Python, R and Octave – Part 1 In part 1, I implement logistic regression as a neural network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2 In the second part I implement a simple Neural network with just 1 hidden layer and a sigmoid activation output function
3. Deep Learning from first principles in Python, R and Octave – Part 3 The 3rd part implemented a multi-layer Deep Learning Network with sigmoid activation output in vectorized Python, R and Octave
4. Deep Learning from first principles in Python, R and Octave – Part 4 The 4th part deals with multi-class classification. Specifically, I derive the Jacobian of the Softmax function and enhance my L-Layer DL network to include Softmax output function in addition to Sigmoid activation
5. Deep Learning from first principles in Python, R and Octave – Part 5 This post uses the Softmax classifier implemented earlier to classify MNIST digits using an L-layer Deep Learning network
6. Deep Learning from first principles in Python, R and Octave – Part 6 The 6th part adds more bells and whistles to my L-Layer DL network, by including different initialization types, namely He and Xavier. Besides this, L2 regularization and random dropout are added.
7. Deep Learning from first principles in Python, R and Octave – Part 7 The 7th part deals with Stochastic Gradient Descent Optimization methods including momentum, RMSProp and Adam
8. Deep Learning from first principles in Python, R and Octave – Part 8 – This post implements a critical function for ensuring the correctness of an L-Layer Deep Learning network implementation, using Gradient Checking

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in Kindle version ($9.99/Rs 449).

You may also like my companion book “Practical Machine Learning with R and Python – Machine Learning in stereo”, available on Amazon in paperback ($9.99) and Kindle ($6.99) versions. This book is ideal as a quick reference for the various ML functions and associated measurements in both R and Python, which are essential to delve deep into Deep Learning.

Gradient Checking is based on the following approach. One iteration of Gradient Descent computes and updates the parameters \theta by doing
\theta := \theta - \alpha \frac{d}{d\theta}J(\theta).
To minimize the cost we will need to minimize J(\theta)
Let g(\theta) be a function that computes the derivative \frac {d}{d\theta}J(\theta). Gradient Checking allows us to numerically evaluate the implementation of the function g(\theta) and verify its correctness.
We know the derivative of a function is given by
\frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
Note: The above is the 2-sided derivative. The 1-sided derivative is given by \frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta)} {\epsilon}
Gradient Checking is based on the 2-sided derivative because its error is of the order O(\epsilon^{2}) as opposed to O(\epsilon) for the 1-sided derivative.
Hence Gradient Check uses the 2-sided derivative as follows
g(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
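To get a feel for why the 2-sided form is preferred, here is a small, self-contained numerical check (my own illustration, not part of the series code) using the toy cost J(\theta)=\theta^{3}, whose exact derivative at \theta=2 is 12.

import numpy as np

# Toy cost function J(theta) = theta^3; exact derivative is 3*theta^2
J = lambda theta: theta**3
theta, epsilon = 2.0, 1e-3
exact = 3 * theta**2                                 # 12.0

# One-sided and two-sided finite-difference approximations
one_sided = (J(theta + epsilon) - J(theta)) / epsilon
two_sided = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)

print("one-sided error:", abs(one_sided - exact))    # ~6e-03, i.e. O(epsilon)
print("two-sided error:", abs(two_sided - exact))    # ~1e-06, i.e. O(epsilon^2)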

In Gradient Check the following is done
A) Run one normal cycle of your implementation by doing the following
a) Compute the output activation by running 1 cycle of forward propagation
b) Compute the cost using the output activation
c) Compute the gradients using backpropagation (grad)

B) Perform gradient check steps as below
a) Set \theta: flatten all the ‘weight’ and ‘bias’ matrices and vectors into a single column vector.
b) Initialize \theta+ by bumping up \theta by adding \epsilon (\theta + \epsilon)
c) Perform forward propagation with \theta+
d) Compute cost with \theta+ i.e. J(\theta+)
e) Initialize  \theta- by bumping down \theta by subtracting \epsilon (\theta - \epsilon)
f) Perform forward propagation with \theta-
g) Compute cost with \theta- i.e.  J(\theta-)
h) Compute \frac {d} {d\theta} J(\theta) or ‘gradapprox’ as \frac {J(\theta+) - J(\theta-) } {2\epsilon} using the 2-sided derivative.
i) Compute the L2 norm, or the Euclidean distance, between ‘grad’ and ‘gradapprox’. If the difference is of the order of 10^{-7} or 10^{-5} the implementation is correct. In the Deep Learning Specialization Prof Andrew Ng mentions that if the difference is of the order of 10^{-7} then the implementation is correct. A difference of 10^{-5} is also ok. Anything more than that is a cause for worry and you should look at your code more closely. To see more details click Gradient checking and advanced optimization
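For reference, step i) boils down to the relative difference between the two gradient vectors. A minimal sketch of this metric, assuming ‘grad’ and ‘gradapprox’ are NumPy column vectors of the same shape, is

import numpy as np

def relative_difference(grad, gradapprox):
    # ||grad - gradapprox|| / (||grad|| + ||gradapprox||)
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    return numerator / denominator

A value around 1e-7 indicates a correct implementation, while anything much larger than 1e-5 warrants a closer look at the backpropagation code.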

You can clone/download the code from Github at DeepLearning-Part8

After spending the better part of 3 days, I now realize how critical Gradient Check is for ensuring the correctness of your implementation. Initially I was getting a very high difference and did not know how to interpret the results or debug my implementation. After many hours of staring at the results, I was finally able to arrive at a way to localize issues in the implementation. In fact, I did catch a small bug in my Python code, which did not exist in the R and Octave implementations. I demonstrate this below.

1.1a Gradient Check – Sigmoid Activation – Python

import numpy as np
import matplotlib

exec(open("DLfunctions8.py").read())
exec(open("testcases.py").read())
#Load the data
train_X, train_Y, test_X, test_Y = load_dataset()
#Set layer dimensions
layersDimensions = [2,4,1]  
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, hiddenActivationFunc="relu",outputActivationFunc="sigmoid")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="sigmoid") 
print("cost=",cost)
#Perform backprop and get gradients
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1,                                   hiddenActivationFunc="relu",outputActivationFunc="sigmoid")

epsilon = 1e-7
outputActivationFunc="sigmoid"

# Set-up variables
# Flatten parameters to a vector
parameters_values, _ = dictionary_to_vector(parameters)
#Flatten gradients to a vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = parameters_values.shape[0]
#Initialize
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))

# Compute gradapprox using 2 sided derivative
for i in range(num_parameters):
    # Compute J_plus[i]. 
    thetaplus = np.copy(parameters_values)                                   
    thetaplus[i][0] = thetaplus[i][0] + epsilon                                 
    AL, caches, dropoutMat = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaplus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    J_plus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc) 
    
    
    # Compute J_minus[i]. 
    thetaminus = np.copy(parameters_values)                                     
    thetaminus[i][0] = thetaminus[i][0] - epsilon                                     
    AL, caches, dropoutMat  = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaminus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    J_minus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc)                            
       
    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)

# Compare gradapprox to backward propagation gradients by computing difference. 
numerator = np.linalg.norm(grad-gradapprox)                                           
denominator = np.linalg.norm(grad) +  np.linalg.norm(gradapprox)                                         
difference =  numerator/denominator                                        

#Check the difference
if difference > 1e-5:
    print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
    print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
print(difference)
print("\n")    
# The technique below can be used to identify 
# which of the parameters are in error
# Convert grad to dictionary
m=vector_to_dictionary2(parameters,grad)
print("Gradients from backprop")
print(m)
print("\n")
# Convert gradapprox to dictionary
n=vector_to_dictionary2(parameters,gradapprox)
print("Gradapprox from gradient check")
print(n)
## (300, 2)
## (300,)
## cost= 0.6931455556341791
## [92mYour backward propagation works perfectly fine! difference = 1.1604150683743381e-06[0m
## 1.1604150683743381e-06
## 
## 
## Gradients from backprop
## {'dW1': array([[-6.19439955e-06, -2.06438046e-06],
##        [-1.50165447e-05,  7.50401672e-05],
##        [ 1.33435433e-04,  1.74112143e-04],
##        [-3.40909024e-05, -1.38363681e-04]]), 'db1': array([[ 7.31333221e-07],
##        [ 7.98425950e-06],
##        [ 8.15002817e-08],
##        [-5.69821155e-08]]), 'dW2': array([[2.73416304e-04, 2.96061451e-04, 7.51837363e-05, 1.01257729e-04]]), 'db2': array([[-7.22232235e-06]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[-6.19448937e-06, -2.06501483e-06],
##        [-1.50168766e-05,  7.50399742e-05],
##        [ 1.33435485e-04,  1.74112391e-04],
##        [-3.40910633e-05, -1.38363765e-04]]), 'db1': array([[ 7.31081862e-07],
##        [ 7.98472399e-06],
##        [ 8.16013923e-08],
##        [-5.71764858e-08]]), 'dW2': array([[2.73416290e-04, 2.96061509e-04, 7.51831930e-05, 1.01257891e-04]]), 'db2': array([[-7.22255589e-06]])}

1.1b Gradient Check – Softmax Activation – Python (Error!!)

In the code below I show how I managed to spot a bug in my implementation

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1

parameters = initializeDeepModel(layersDimensions)
#Compute forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
# Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")

cost= 1.0986187818144022
2
There is a mistake in the backward propagation! difference = 0.7100295155692544
0.7100295155692544


Gradients from backprop
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40953794e-05, -9.52657769e-04, -1.10269379e-04],
       [-7.45469382e-04,  9.49795606e-04,  2.29045434e-04],
       [ 8.29564761e-04,  2.86216305e-06, -1.18776055e-04]]), 
     'db2': array([[-0.00253808],
       [-0.00505508],
       [ 0.00759315]])}


Gradapprox from gradient check
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40960634e-05, -9.52657953e-04, -1.10268461e-04],
       [-7.45469242e-04,  9.49796908e-04,  2.29045671e-04],
       [ 8.29565305e-04,  2.86104473e-06, -1.18776100e-04]]), 
     'db2': array([[-8.46211989e-06],
       [-1.68487446e-05],
       [ 2.53108645e-05]])}

Gradient Check gives a high difference value of 0.7100295. Inspecting the gradients and gradapprox values, we can see that there is a very big discrepancy in db2. After going over my code I discovered that my computation in the function layerActivationBackward for Softmax was

 
   # Erroneous code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)
instead of
   # Fixed code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = 1/numtraining *  np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)

After fixing this error, when I re-run Gradient Check I get

1.1c Gradient Check – Softmax Activation – Python (Corrected!!)

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1
#Set layer dimensions
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
#Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")
## cost= 1.0986193170234435
## 2
## [92mYour backward propagation works perfectly fine! difference = 5.268804859613151e-07[0m
## 5.268804859613151e-07
## 
## 
## Gradients from backprop
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83441270e-05, -9.70179498e-04, -1.08715815e-04],
##        [-7.70175008e-04,  9.54478237e-04,  2.27690198e-04],
##        [ 8.48519135e-04,  1.57012608e-05, -1.18974383e-04]]), 'db2': array([[-8.52190476e-06],
##        [-1.69954294e-05],
##        [ 2.55173342e-05]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83439980e-05, -9.70180603e-04, -1.08716369e-04],
##        [-7.70173925e-04,  9.54478718e-04,  2.27690089e-04],
##        [ 8.48520143e-04,  1.57018842e-05, -1.18973720e-04]]), 'db2': array([[-8.52096171e-06],
##        [-1.69964043e-05],
##        [ 2.55162558e-05]])}

1.2a Gradient Check – Sigmoid Activation – R

source("DLfunctions8.R")
z <- as.matrix(read.csv("circles.csv",header=FALSE)) 

x <- z[,1:2]
y <- z[,3]
X <- t(x)
Y <- t(y)
#Set layer dimensions
layersDimensions = c(2,5,1)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="sigmoid")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="sigmoid",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 0.6931447
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="sigmoid",numClasses=layersDimensions[length(layersDimensions)])
epsilon = 1e-07
outputActivationFunc="sigmoid"
#Convert parameter list to vector
parameters_values = list_to_vector(parameters)
#Convert gradient list to vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = dim(parameters_values)[1]
#Initialize
J_plus = matrix(rep(0,num_parameters),
                nrow=num_parameters,ncol=1)
J_minus = matrix(rep(0,num_parameters),
                 nrow=num_parameters,ncol=1)
gradapprox = matrix(rep(0,num_parameters),
                    nrow=num_parameters,ncol=1)

# Compute gradapprox
for(i in 1:num_parameters){
    # Compute J_plus[i]. 
    thetaplus = parameters_values                                   
    thetaplus[i][1] = thetaplus[i][1] + epsilon                                 
    retvals = forwardPropagationDeep(X, vector_to_list(parameters,thetaplus), keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    
    AL <- retvals[['AL']]
    J_plus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc) 


   # Compute J_minus[i]. 
    thetaminus = parameters_values                                     
    thetaminus[i][1] = thetaminus[i][1] - epsilon                                     
    retvals  = forwardPropagationDeep(X, vector_to_list(parameters,thetaminus), keep_prob=1, 
                                                 hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    AL <- retvals[['AL']]
    J_minus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc)                            

    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)
}
# Compare gradapprox to backward propagation gradients by computing difference.
#Compute L2Norm
numerator = L2NormVec(grad-gradapprox)                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox)                                         
difference =  numerator/denominator 
if(difference > 1e-5){
    cat("There is a mistake, the difference is too high",difference)
} else{
    cat("The implementations works perfectly", difference)
}
## The implementations works perfectly 1.279911e-06
# This can be used to check
print("Gradients from backprop")
## [1] "Gradients from backprop"
vector_to_list2(parameters,grad)
## $dW1
##               [,1]          [,2]
## [1,] -7.641588e-05 -3.427989e-07
## [2,] -9.049683e-06  6.906304e-05
## [3,]  3.401039e-06 -1.503914e-04
## [4,]  1.535226e-04 -1.686402e-04
## [5,] -6.029292e-05 -2.715648e-04
## 
## $db1
##               [,1]
## [1,]  6.930318e-06
## [2,] -3.283117e-05
## [3,]  1.310647e-05
## [4,] -3.454308e-05
## [5,] -2.331729e-08
## 
## $dW2
##              [,1]         [,2]         [,3]        [,4]         [,5]
## [1,] 0.0001612356 0.0001113475 0.0002435824 0.000362149 2.874116e-05
## 
## $db2
##              [,1]
## [1,] -1.16364e-05
print("Grad approx from gradient check")
## [1] "Grad approx from gradient check"
vector_to_list2(parameters,gradapprox)
## $dW1
##               [,1]          [,2]
## [1,] -7.641554e-05 -3.430589e-07
## [2,] -9.049428e-06  6.906253e-05
## [3,]  3.401168e-06 -1.503919e-04
## [4,]  1.535228e-04 -1.686401e-04
## [5,] -6.029288e-05 -2.715650e-04
## 
## $db1
##               [,1]
## [1,]  6.930012e-06
## [2,] -3.283096e-05
## [3,]  1.310618e-05
## [4,] -3.454237e-05
## [5,] -2.275957e-08
## 
## $dW2
##              [,1]         [,2]         [,3]         [,4]        [,5]
## [1,] 0.0001612355 0.0001113476 0.0002435829 0.0003621486 2.87409e-05
## 
## $db2
##              [,1]
## [1,] -1.16368e-05

1.2b Gradient Check – Softmax Activation – R

source("DLfunctions8.R")
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X <- t(X)
Y <- t(y)
layersDimensions = c(2, 3, 3)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="softmax")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="softmax",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 1.098618
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="softmax",numClasses=layersDimensions[length(layersDimensions)])
# Need to take transpose of the last layer for Softmax
L=length(parameters)/2
gradients[[paste('dW',L,sep="")]]=t(gradients[[paste('dW',L,sep="")]])
gradients[[paste('db',L,sep="")]]=t(gradients[[paste('db',L,sep="")]])
#Perform gradient check
gradient_check_n(parameters, gradients, X, Y, 
                 epsilon = 1e-7,outputActivationFunc="softmax")
## The implementations works perfectly 3.903011e-07[1] "Gradients from backprop"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962367 -0.0001907606
## [2,] 0.0004444254  0.0010354412
## [3,] 0.0003078611  0.0007591255
## 
## $db1
##               [,1]
## [1,] -0.0017305136
## [2,]  0.0005393734
## [3,]  0.0012484550
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515627e-04  7.487283e-04 -3.971656e-04
## [2,] -6.381521e-05 -1.257328e-06  6.507254e-05
## [3,] -1.719479e-04 -4.857264e-04  6.576743e-04
## 
## $db2
##               [,1]
## [1,] -5.536383e-06
## [2,] -1.824656e-05
## [3,]  2.378295e-05
## 
## [1] "Grad approx from gradient check"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962364 -0.0001907607
## [2,] 0.0004444256  0.0010354406
## [3,] 0.0003078615  0.0007591250
## 
## $db1
##               [,1]
## [1,] -0.0017305135
## [2,]  0.0005393741
## [3,]  0.0012484547
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515632e-04  7.487277e-04 -3.971656e-04
## [2,] -6.381451e-05 -1.257883e-06  6.507239e-05
## [3,] -1.719469e-04 -4.857270e-04  6.576739e-04
## 
## $db2
##               [,1]
## [1,] -5.536682e-06
## [2,] -1.824652e-05
## [3,]  2.378209e-05

1.3a Gradient Check – Sigmoid Activation – Octave

source("DL8functions.m")
################## Circles
data=csvread("circles.csv");

X=data(:,1:2);
Y=data(:,3);
#Set layer dimensions
layersDimensions = [2 5  1]; #tanh=-0.5(ok), #relu=0.1 best!
[weights biases] = initializeDeepModel(layersDimensions);
#Perform forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid");
#Compute cost
cost = computeCost(AL, Y',outputActivationFunc="sigmoid",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
#Compute gradients from cost
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));
epsilon = 1e-07;
outputActivationFunc="sigmoid";
# Convert parameter cell array to vector
parameters_values = cellArray_to_vector(weights, biases);
#Convert gradient cell array to vector
grad = gradients_to_vector(gradsDW,gradsDB);
num_parameters = size(parameters_values)(1);
#Initialize
J_plus = zeros(num_parameters, 1);
J_minus = zeros(num_parameters, 1);
gradapprox = zeros(num_parameters, 1);
# Compute gradapprox
for i = 1:num_parameters
    # Compute J_plus[i]. 
    thetaplus = parameters_values;                                   
    thetaplus(i,1) = thetaplus(i,1) + epsilon;      
    [weights1 biases1] =vector_to_cellArray(weights, biases,thetaplus);    
    [AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);
    J_plus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc); 
        
    # Compute J_minus[i]. 
    thetaminus = parameters_values;                                  
    thetaminus(i,1) = thetaminus(i,1) - epsilon ;    
    [weights1 biases1] = vector_to_cellArray(weights, biases,thetaminus);    
    [AL forward_caches activation_caches droputMat]  = forwardPropagationDeep(X',weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);     
    J_minus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc);                            
      
    # Compute gradapprox[i]   
    gradapprox(i) = (J_plus(i) - J_minus(i))/(2*epsilon);

endfor

#Compute L2Norm
numerator = L2NormVec(grad-gradapprox);                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox);                                         
difference =  numerator/denominator;
disp(difference);
#Check difference
if difference > 1e-04
   printf("There is a mistake in the implementation ");
   disp(difference);
else
   printf("The implementation works perfectly");
      disp(difference);
endif
[weights1 biases1] = vector_to_cellArray(weights, biases,grad);  
printf("Gradients from back propagation"); 
disp(weights1);
disp(biases1); 
[weights2 biases2] = vector_to_cellArray(weights, biases,gradapprox); 
printf("Gradients from gradient check");
disp(weights2);
disp(biases2); 
0.69315
1.4893e-005
The implementation works perfectly 1.4893e-005
Gradients from back propagation
{
[1,1] =
5.0349e-005 2.1323e-005
8.8632e-007 1.8231e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9529e-007
5.4502e-005 3.2721e-005
[1,2] =
1.0567e-005 6.0615e-005 4.6004e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8716e-005
1.1309e-009
4.7686e-005
1.2051e-005
-1.4612e-005
[1,2] = 9.5808e-006
}
Gradients from gradient check
{
[1,1] =
5.0348e-005 2.1320e-005
8.8485e-007 1.8219e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9762e-007
5.4502e-005 3.2723e-005
[1,2] =
1.0565e-005 6.0614e-005 4.6007e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8713e-005
1.1102e-009
4.7687e-005
1.2048e-005
-1.4609e-005
[1,2] = 9.5790e-006
}

1.3b Gradient Check – Softmax Activation – Octave

source("DL8functions.m")
data=csvread("spiral.csv");

# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the layer dimensions
layersDimensions = [2 3  3]; 
[weights biases] = initializeDeepModel(layersDimensions);
# Run forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="softmax");
# Compute cost
cost = computeCost(AL, Y',outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
# Perform backward prop
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="softmax",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));

#Take transpose of last layer for Softmax                                
L=size(weights)(2);
gradsDW{L}= gradsDW{L}';
gradsDB{L}= gradsDB{L}';   
#Perform gradient check
difference= gradient_check_n(weights, biases, gradsDW,gradsDB, X, Y, epsilon = 1e-7,
                  outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
 1.0986
The implementation works perfectly  2.0021e-005
Gradients from back propagation
{
  [1,1] =
    -7.1590e-005  4.1375e-005
    -1.9494e-004  -5.2014e-005
    -1.4554e-004  5.1699e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5662e-005
    -4.9692e-004  -3.7756e-004  -8.2318e-005
    1.6562e-004  1.7950e-004  9.7980e-005
}
{
  [1,1] =
    -3.0856e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.2046e-006
    2.9259e-007
    -1.4972e-006
}
Gradients from gradient check
{
  [1,1] =
    -7.1586e-005  4.1377e-005
    -1.9494e-004  -5.2013e-005
    -1.4554e-004  5.1695e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5664e-005
    -4.9692e-004  -3.7756e-004  -8.2316e-005
    1.6562e-004  1.7950e-004  9.7979e-005
}
{
  [1,1] =
    -3.0852e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.1902e-006
    2.8200e-007
    -1.4644e-006
}

2.1 Tips for tuning hyper-parameters

Deep Learning Networks come with a large number of hyper-parameters which require tuning. The hyper-parameters are

1. \alpha -learning rate
2. Number of layers
3. Number of hidden units
4. Number of iterations
5. Momentum – \beta – 0.9
6. RMSProp – \beta_{1} – 0.9
7. Adam – \beta_{1},\beta_{2} and \epsilon
8. learning rate decay
9. mini batch size
10. Initialization method – He, Xavier
11. Regularization

– Among the above, the most critical is the learning rate \alpha. Rather than just trying out random values, it helps to try out values on a logarithmic scale. So we could try out values 0.01, 0.1, 1.0, 10 etc. If we find that the best value lies between 0.01 and 0.1, we could use a technique similar to binary search or bisection: try 0.05 next, and depending on which side the better value lies, keep halving the interval (see the short sketch after these tips).
– Momentum and RMSProp perform very well and work well with a value of 0.9. Even so, it is better to try out values of 1-\beta on a logarithmic scale. So 1-\beta could be 0.001, 0.01 or 0.1, and hence \beta would be 0.999, 0.99 or 0.9
– Increasing the number of hidden units or the number of hidden layers needs to be done gradually. I have noticed that increasing the number of hidden layers heavily does not improve performance and sometimes degrades it.
– Sometimes, I tend to increase the number of iterations if I see a steady decrease in the cost for a certain learning rate
– It may also help to add learning rate decay if you see the cost oscillating while it decreases.
– Xavier and He initializations also help in fast convergence and are worth trying out.
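As an illustration of the logarithmic-scale search mentioned above (a sketch of my own, not code from this series), candidate learning rates can be drawn log-uniformly so that each decade gets equal coverage:

import numpy as np

np.random.seed(7)
# Sample the exponent uniformly in [-3, 0] and exponentiate,
# giving candidate learning rates between 1e-3 and 1
exponents = np.random.uniform(low=-3, high=0, size=5)
candidate_alphas = 10 ** exponents
print(np.sort(candidate_alphas))

The same trick applies to 1-\beta for Momentum and RMSProp, by sampling the exponent between, say, -3 and -1.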

3.1 Final thoughts

As I come to a close in this Deep Learning Series from first principles in Python, R and Octave, I must admit that I learnt a lot in the process.

* Building an L-layer, vectorized Deep Learning Network in Python, R and Octave was extremely challenging but very rewarding
* One benefit of building vectorized versions in Python, R and Octave was that I was looking at each function I implemented thrice, and hence was able to fix bugs in any of the languages
* In addition, since I built the generic L-Layer DL network with all the bells and whistles, layer by layer, I had a further opportunity to revisit all the functions in each successive post.
* Each language has its advantages and disadvantages. From a performance perspective I think Python is the best, followed by Octave and then R
* Interestingly, I noticed that even if small bugs creep into your implementation, the DL network does learn and does generate a valid set of weights and biases; however, this may not be an optimum solution. In one case of an inadvertent bug, I was not updating the weights in the final layer of the DL network. Yet, using all the other layers, the DL network was able to come up with a reasonable solution (maybe, as with random dropout, the remaining units can still learn the data!)
* Having said that, the Gradient Check method discussed and implemented in this post can be very useful in ironing out bugs.
Feel free to clone/download the code from Github at DeepLearning-Part8

Conclusion

These last couple of months, when I was writing the posts and also churning out the code in Python, R and Octave, were very hectic. There were times when I found the implementation of some function extremely demanding and I almost felt like giving up. At other times, I spent quite some time on an intractable DL network which would not respond to changes in hyper-parameters. All in all, it was a great learning experience. I would suggest that you start from my first post Deep Learning from first principles in Python, R and Octave-Part 1 and work your way up. Feel free to take the code apart and try out things. That is the only way you will learn.

Hope you had as much fun as I had. Stay tuned. I will be back!!!

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Revisiting crimes against women in India
3. Literacy in India – A deepR dive
4. Sixer – R package cricketr’s new Shiny avatar
5. Bend it like Bluemix, MongoDB using Auto-scale – Part 1!
6. Computer Vision: Ramblings on derivatives, histograms and contours
7. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
8. A closer look at “Robot Horse on a Trot” in Android

To see all post click Index of Posts

Deep Learning from first principles in Python, R and Octave – Part 7

Artificial Intelligence is the new electricity. – Prof Andrew Ng

Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don’t know how to make the cake. We need to solve the unsupervised learning problem before we can even think of getting to true AI.  – Yann LeCun, March 14, 2016 (Facebook)

Introduction

In this post ‘Deep Learning from first principles with Python, R and Octave-Part 7’, I implement optimization methods used in Stochastic Gradient Descent (SGD) to speed up the convergence. Specifically I discuss and implement the following gradient descent optimization techniques

a. Vanilla Stochastic Gradient Descent
b. Learning rate decay
c. Momentum method
d. RMSProp
e. Adaptive Moment Estimation (Adam)

This post further enhances my generic L-Layer Deep Learning Network implementations in vectorized Python, R and Octave to also include the Stochastic Gradient Descent optimization techniques. You can clone/download the code from Github at DeepLearning-Part7

You can view my video presentation on Gradient Descent Optimization in Neural Networks 7

Incidentally, a good discussion of the various optimization methods used in Stochastic Gradient Descent can be seen at Sebastian Ruder’s blog

Note: In the vectorized Python, R and Octave implementations below, only 1024 random training samples were used. This was to reduce the computation time. You are free to use the entire data set (60,000 training samples) for the computation.

This post is largely based on Prof Andrew Ng’s Deep Learning Specialization. All the above optimization techniques for Stochastic Gradient Descent are based on the technique of exponentially weighted averages. So, for example, if we had some time series data \theta_{1},\theta_{2},\theta_{3}... \theta_{t} then we can represent the exponentially weighted average value at time ‘t’ in terms of the previous value v_{t-1} and \theta_{t} as shown below
v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}

Here v_{t} represents the average over approximately the last \frac {1}{1-\beta} data points. By choosing different values of \beta, we can average over a larger or smaller number of data points.
We can write the equations as follows
v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}
v_{t-1} = \beta v_{t-2} + (1-\beta)\theta_{t-1}
v_{t-2} = \beta v_{t-3} + (1-\beta)\theta_{t-2}
and
v_{t-k} = \beta v_{t-(k+1)} + (1-\beta)\theta_{t-k}
By substitution we have
v_{t} = (1-\beta)\theta_{t} + \beta v_{t-1}
v_{t} = (1-\beta)\theta_{t} + \beta (1-\beta)\theta_{t-1} + \beta^{2} v_{t-2}
v_{t} = (1-\beta)\theta_{t} + \beta (1-\beta)\theta_{t-1} + \beta^{2} (1-\beta)\theta_{t-2} + \beta^{3} v_{t-3}

Hence it can be seen that v_{t} is a weighted sum over the previous values \theta_{k}, with weights that decay exponentially.
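A minimal sketch of this recursion (my own illustration, independent of the DL functions used in this series) is shown below; with \beta=0.9 each value is roughly an average over the last 10 data points.

import numpy as np

def exponentially_weighted_average(theta, beta=0.9):
    # v_t = beta * v_{t-1} + (1 - beta) * theta_t, with v_0 = 0
    v = np.zeros(len(theta))
    v_prev = 0.0
    for t, theta_t in enumerate(theta):
        v_prev = beta * v_prev + (1 - beta) * theta_t
        v[t] = v_prev
    return v

# Smooth a noisy series
np.random.seed(0)
series = np.sin(np.linspace(0, 3, 50)) + 0.1 * np.random.randn(50)
print(exponentially_weighted_average(series)[:5])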

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in Kindle version ($9.99/Rs 449).

You may also like my companion book “Practical Machine Learning with R and Python – Machine Learning in stereo”, available on Amazon in paperback ($9.99) and Kindle ($6.99) versions. This book is ideal as a quick reference for the various ML functions and associated measurements in both R and Python, which are essential to delve deep into Deep Learning.

1.1a. Stochastic Gradient Descent (Vanilla) – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read the training data
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set the layer dimensions  
layersDimensions=[784, 15,9,10] 
# Perform SGD with regular gradient descent
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="gd",
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig1.png")

1.1b. Stochastic Gradient Descent (Vanilla) – R

source("mnist.R")
source("DLfunctions7.R")
#Load and read MNIST data
load_mnist() 
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
# Set layer dimensions
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with regular gradient descent
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                            hiddenActivationFunc='tanh',
                            outputActivationFunc="softmax",
                            learningRate = 0.05,
                            optimizer="gd",
                            mini_batch_size = 512, 
                            num_epochs = 5000, 
                            print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs no of epochs") + xlab("No of epochss") + ylab("Cost")

1.1c. Stochastic Gradient Descent (Vanilla) – Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with regular gradient descent
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.005,
 lrDecay=true, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="gd",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);

plotCostVsEpochs(5000,costs);

2.1. Stochastic Gradient Descent with Learning rate decay

Since in Stochastic Gradient Descent, with each epoch, we use slightly different samples, the gradient descent algorithm oscillates across the ravines and wanders around the minima when a fixed learning rate is used. In this technique of ‘learning rate decay’ the learning rate is slowly decreased with the number of epochs and becomes smaller and smaller, so that gradient descent can take smaller steps towards the minima.

There are several techniques employed in learning rate decay

a) Exponential decay: \alpha = decayRate^{epochNum} *\alpha_{0}
b) 1/t decay : \alpha = \frac{\alpha_{0}}{1 + decayRate*epochNum}
c) \alpha = \frac {decayRate}{\sqrt{epochNum}}*\alpha_{0}

In my implementation I have used the ‘exponential decay’. The code snippet for Python is shown below

if lrDecay == True:
   learningRate = np.power(decayRate,(num_epochs/1000)) * learningRate
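For comparison, here is a small sketch (my own, using the schedules a), b) and c) above with an assumed \alpha_{0}=0.01 and decayRate=0.95) of how the three schemes shrink the learning rate over the first few epochs:

import numpy as np

alpha0, decayRate = 0.01, 0.95
epochs = np.arange(1, 6)

exponential_decay = (decayRate ** epochs) * alpha0           # a) decayRate^epochNum * alpha0
one_over_t_decay  = alpha0 / (1 + decayRate * epochs)        # b) alpha0 / (1 + decayRate*epochNum)
sqrt_decay        = (decayRate / np.sqrt(epochs)) * alpha0   # c) decayRate/sqrt(epochNum) * alpha0

print(exponential_decay)
print(one_over_t_decay)
print(sqrt_decay)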

2.1a. Stochastic Gradient Descent with Learning rate decay – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read the MNIST data
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set layer dimensions
layersDimensions=[784, 15,9,10] 
# Perform SGD with learning rate decay
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",
                                   learningRate = 0.01 , lrDecay=True, decayRate=0.9999,
                                   optimizer="gd",
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig2.png")

2.1b. Stochastic Gradient Descent with Learning rate decay – R

source("mnist.R")
source("DLfunctions7.R")
# Read and load MNIST
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
# Set layer dimensions
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with Learning rate decay
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.05,
                                  lrDecay=TRUE,
                                  decayRate=0.9999,
                                  optimizer="gd",
                                  mini_batch_size = 512, 
                                  num_epochs = 5000, 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

2.1c. Stochastic Gradient Descent with Learning rate decay – Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with Learning rate decay
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.01,
 lrDecay=true, 
 decayRate=0.999,
 lambd=0,
 keep_prob=1,
 optimizer="gd",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);
plotCostVsEpochs(5000,costs)

3.1. Stochastic Gradient Descent with Momentum

Stochastic Gradient Descent with Momentum uses the exponentially weighted average method discussed above and, generally speaking, moves faster along the ravine than across it. The equations are
v_{dW}^l = \beta v_{dW}^l + (1-\beta)dW^{l}
v_{db}^l = \beta v_{db}^l + (1-\beta)db^{l}
W^{l} = W^{l} - \alpha v_{dW}^l
b^{l} = b^{l} - \alpha v_{db}^l where
v_{dW} and v_{db} are the momentum terms, which are exponentially weighted averages of the corresponding gradients ‘dW’ and ‘db’ at layer ‘l’. The code snippet for Stochastic Gradient Descent with momentum in R is shown below

# Perform Gradient Descent with momentum
# Input : Weights and biases
#       : beta
#       : gradients
#       : learning rate
#       : outputActivationFunc - Activation function at output layer sigmoid/softmax
#output : Updated weights after 1 iteration
gradientDescentWithMomentum  <- function(parameters, gradients,v, beta, learningRate,outputActivationFunc="sigmoid"){

    L = length(parameters)/2 # number of layers in the neural network    
    # Update rule for each parameter. Use a for loop.
    for(l in 1:(L-1)){
        # Compute velocities
        # v['dWk'] = beta *v['dWk'] + (1-beta)*dWk
        v[[paste("dW",l, sep="")]] = beta*v[[paste("dW",l, sep="")]] + 
                   (1-beta) * gradients[[paste('dW',l,sep="")]]
        v[[paste("db",l, sep="")]] = beta*v[[paste("db",l, sep="")]] + 
            (1-beta) * gradients[[paste('db',l,sep="")]]
        
        parameters[[paste("W",l,sep="")]] = parameters[[paste("W",l,sep="")]] -
            learningRate* v[[paste("dW",l, sep="")]] 
        parameters[[paste("b",l,sep="")]] = parameters[[paste("b",l,sep="")]] -
            learningRate* v[[paste("db",l, sep="")]] 
    }    
    # Compute for the Lth layer
    if(outputActivationFunc=="sigmoid"){
        v[[paste("dW",L, sep="")]] = beta*v[[paste("dW",L, sep="")]] + 
            (1-beta) * gradients[[paste('dW',L,sep="")]]
        v[[paste("db",L, sep="")]] = beta*v[[paste("db",L, sep="")]] + 
            (1-beta) * gradients[[paste('db',L,sep="")]]
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate* v[[paste("dW",l, sep="")]]  
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate* v[[paste("db",l, sep="")]]
        
    }else if (outputActivationFunc=="softmax"){
        v[[paste("dW",L, sep="")]] = beta*v[[paste("dW",L, sep="")]] + 
            (1-beta) * t(gradients[[paste('dW',L,sep="")]])
        v[[paste("db",L, sep="")]] = beta*v[[paste("db",L, sep="")]] + 
            (1-beta) * t(gradients[[paste('db',L,sep="")]])       
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate* t(gradients[[paste("dW",L,sep="")]])
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate* t(gradients[[paste("db",L,sep="")]])
    }
    return(parameters)
}
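For readers following the Python implementation, a minimal single-layer sketch of the same momentum update (my own illustration with hypothetical variable names, not the function used in the series) is:

import numpy as np

def momentum_update(W, b, dW, db, vdW, vdb, beta=0.9, learning_rate=0.01):
    # Velocity: exponentially weighted average of the gradients
    vdW = beta * vdW + (1 - beta) * dW
    vdb = beta * vdb + (1 - beta) * db
    # Parameter step uses the velocity instead of the raw gradient
    W = W - learning_rate * vdW
    b = b - learning_rate * vdb
    return W, b, vdW, vdb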

3.1a. Stochastic Gradient Descent with Momentum- Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
# Read and load data
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10] 
# Perform SGD with momentum
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="momentum", beta=0.9,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig3.png")

3.1b. Stochastic Gradient Descent with Momentum- R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with momentum
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.05,
                                  optimizer="momentum",
                                  beta=0.9,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000, 
                                  print_cost = TRUE)

#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

3.1c. Stochastic Gradient Descent with Momentum- Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with Momentum
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.01,
 lrDecay=false, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="momentum",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);

plotCostVsEpochs(5000,costs)

4.1. Stochastic Gradient Descent with RMSProp

Stochastic Gradient Descent with RMSProp tries to move faster towards the minima while dampening the oscillations across the ravine.
The equations are

s_{dW}^l = \beta_{1} s_{dW}^l + (1-\beta_{1})(dW^{l})^{2}
s_{db}^l = \beta_{1} s_{db}^l + (1-\beta_{1})(db^{l})^{2}
W^{l} = W^{l} - \alpha \frac {dW^{l}}{\sqrt {s_{dW}^l + \epsilon} }
b^{l} = b^{l} - \alpha \frac {db^{l}}{\sqrt {s_{db}^l + \epsilon} }
where s_{dW} and s_{db} are the RMSProp terms, which are exponentially weighted averages of the squares of the corresponding gradients ‘dW’ and ‘db’ at layer ‘l’

The code snippet in Octave is shown below

# Update parameters with RMSProp
# Input : weights, biases
#       : gradsDW, gradsDB - gradients
#       : sdW, sdB - moving averages of the squared gradients
#       : beta1
#       : epsilon
#       : learningRate
#output : Updated weights and biases after 1 RMSProp update
function [weights biases] = gradientDescentWithRMSProp(weights, biases,gradsDW,gradsDB, sdW, sdB, beta1, epsilon, learningRate,outputActivationFunc="sigmoid")
    L = size(weights)(2); # number of layers in the neural network
    # Update rule for each parameter. 
    for l=1:(L-1)
        sdW{l} =  beta1*sdW{l} + (1 -beta1) * gradsDW{l} .* gradsDW{l};
        sdB{l} =  beta1*sdB{l} + (1 -beta1) * gradsDB{l} .* gradsDB{l};
        weights{l} = weights{l} - learningRate* gradsDW{l} ./ sqrt(sdW{l} + epsilon); 
        biases{l} = biases{l} -  learningRate* gradsDB{l} ./ sqrt(sdB{l} + epsilon);
    endfor
  
    if (strcmp(outputActivationFunc,"sigmoid"))
        sdW{L} =  beta1*sdW{L} + (1 -beta1) * gradsDW{L} .* gradsDW{L};
        sdB{L} =  beta1*sdB{L} + (1 -beta1) * gradsDB{L} .* gradsDB{L};
        weights{L} = weights{L} -learningRate* gradsDW{L} ./ sqrt(sdW{L} +epsilon); 
        biases{L} = biases{L} -learningRate* gradsDB{L} ./ sqrt(sdB{L} + epsilon);
     elseif (strcmp(outputActivationFunc,"softmax"))
        sdW{L} =  beta1*sdW{L} + (1 -beta1) * gradsDW{L}' .* gradsDW{L}';
        sdB{L} =  beta1*sdB{L} + (1 -beta1) * gradsDB{L}' .* gradsDB{L}';
        weights{L} = weights{L} -learningRate* gradsDW{L}' ./ sqrt(sdW{L} +epsilon); 
        biases{L} = biases{L} -learningRate* gradsDB{L}' ./ sqrt(sdB{L} + epsilon);
     endif   
end
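Similarly, a minimal single-layer RMSProp update in Python (my own sketch with hypothetical names, mirroring the Octave function above with \epsilon kept inside the square root) would be:

import numpy as np

def rmsprop_update(W, b, dW, db, sdW, sdb, beta1=0.9, learning_rate=0.01, epsilon=1e-8):
    # Exponentially weighted average of the squared gradients
    sdW = beta1 * sdW + (1 - beta1) * dW**2
    sdb = beta1 * sdb + (1 - beta1) * db**2
    # Scale the raw gradient by 1/sqrt(s + epsilon)
    W = W - learning_rate * dW / np.sqrt(sdW + epsilon)
    b = b - learning_rate * db / np.sqrt(sdb + epsilon)
    return W, b, sdW, sdb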

4.1a. Stochastic Gradient Descent with RMSProp – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read and load MNIST
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

print("X1=",X1.shape)
print("y1=",Y1.shape)

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
  
layersDimensions=[784, 15,9,10] 
# Use SGD with RMSProp
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="rmsprop", beta1=0.7, epsilon=1e-8,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig4.png")

4.1b. Stochastic Gradient Descent with RMSProp – R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
#Perform SGD with RMSProp
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.001,
                                  optimizer="rmsprop",
                                  beta1=0.9,
                                  epsilon=10^-8,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000 , 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

4.1c. Stochastic Gradient Descent with RMSProp – Octave

source("DL7functions.m")
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
#Perform SGD with RMSProp
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.005,
 lrDecay=false, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="rmsprop",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=1,
 mini_batch_size = 512, 
 num_epochs = 5000);
plotCostVsEpochs(5000,costs)

5.1. Stochastic Gradient Descent with Adam

Adaptive Moment Estimation (Adam) combines momentum (the 1st moment) and RMSProp (the 2nd moment). The equations for Adam are below
v_{dW}^l = \beta_{1} v_{dW}^l + (1-\beta_{1})dW^{l}
v_{db}^l = \beta_{1} v_{db}^l + (1-\beta_{1})db^{l}
The bias corrections for the 1st moment
vCorrected_{dW}^l= \frac {v_{dW}^l}{1 - \beta_{1}^{t}}
vCorrected_{db}^l= \frac {v_{db}^l}{1 - \beta_{1}^{t}}

Similarly, the moving average for the 2nd moment (the RMSProp term) is
s_{dW}^l = \beta_{2} s_{dW}^l + (1-\beta_{2})(dW^{l})^2
s_{db}^l = \beta_{2} s_{db}^l + (1-\beta_{2})(db^{l})^2
The bias corrections for the 2nd moment
sCorrected_{dW}^l= \frac {s_{dW}^l}{1 - \beta_{2}^{t}}
sCorrected_{db}^l= \frac {s_{db}^l}{1 - \beta_{2}^{t}}

The Adam Gradient Descent update is given by
W^{l} = W^{l} - \frac{\alpha vCorrected_{dW}^{l}}{\sqrt{sCorrected_{dW}^{l} + \epsilon}}
b^{l} = b^{l} - \frac{\alpha vCorrected_{db}^{l}}{\sqrt{sCorrected_{db}^{l} + \epsilon}}
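
Before the R snippet, here is a minimal NumPy sketch of the Adam update for a single weight matrix, including the bias correction; the variable names are illustrative only.

import numpy as np

def adam_update(W, dW, v, s, t, beta1=0.9, beta2=0.999, learning_rate=0.01, epsilon=1e-8):
    # 1st moment (momentum-like) and 2nd moment (RMSProp-like) moving averages
    v = beta1 * v + (1 - beta1) * dW
    s = beta2 * s + (1 - beta2) * dW ** 2
    # Bias correction for the early iterations (t is the update count, starting at 1)
    v_corrected = v / (1 - beta1 ** t)
    s_corrected = s / (1 - beta2 ** t)
    # Parameter update
    W = W - learning_rate * v_corrected / np.sqrt(s_corrected + epsilon)
    return W, v, s

# Toy usage
W = np.random.randn(4, 2)
v, s = np.zeros_like(W), np.zeros_like(W)
for t in range(1, 4):
    dW = np.random.randn(4, 2)          # stand-in for a real gradient
    W, v, s = adam_update(W, dW, v, s, t)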
The code snippet of Adam in R is included below

# Perform Gradient Descent with Adam
# Input : parameters (weights and biases)
#       : gradients
#       : v, s - 1st and 2nd moment moving averages
#       : t - update count used for bias correction
#       : beta1, beta2, epsilon
#       : learning rate
#       : outputActivationFunc - activation function at the output layer (sigmoid/softmax)
# Output : Updated parameters after 1 iteration
gradientDescentWithAdam  <- function(parameters, gradients,v, s, t, 
                        beta1=0.9, beta2=0.999, epsilon=10^-8, learningRate=0.1,outputActivationFunc="sigmoid"){
    
    L = length(parameters)/2 # number of layers in the neural network
    v_corrected <- list()
    s_corrected <- list()
    # Update rule for each parameter. Use a for loop.
    for(l in 1:(L-1)){
        # v['dWk'] = beta *v['dWk'] + (1-beta)*dWk
        v[[paste("dW",l, sep="")]] = beta1*v[[paste("dW",l, sep="")]] + 
            (1-beta1) * gradients[[paste('dW',l,sep="")]]
        v[[paste("db",l, sep="")]] = beta1*v[[paste("db",l, sep="")]] + 
            (1-beta1) * gradients[[paste('db',l,sep="")]]
        
        
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",l, sep="")]] = v[[paste("dW",l, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",l, sep="")]] = v[[paste("db",l, sep="")]]/(1-beta1^t)
               
        # Element wise multiply of gradients
        s[[paste("dW",l, sep="")]] = beta2*s[[paste("dW",l, sep="")]] + 
            (1-beta2) * gradients[[paste('dW',l,sep="")]] * gradients[[paste('dW',l,sep="")]]
        s[[paste("db",l, sep="")]] = beta2*s[[paste("db",l, sep="")]] + 
            (1-beta2) * gradients[[paste('db',l,sep="")]] * gradients[[paste('db',l,sep="")]]
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",l, sep="")]] = s[[paste("dW",l, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",l, sep="")]] = s[[paste("db",l, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",l, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",l, sep="")]]+epsilon)        
                
        parameters[[paste("W",l,sep="")]] = parameters[[paste("W",l,sep="")]] -
            learningRate * v_corrected[[paste("dW",l, sep="")]]/d1
        parameters[[paste("b",l,sep="")]] = parameters[[paste("b",l,sep="")]] -
            learningRate*v_corrected[[paste("db",l, sep="")]]/d2
    }    
    # Compute for the Lth layer
    if(outputActivationFunc=="sigmoid"){
        v[[paste("dW",L, sep="")]] = beta1*v[[paste("dW",L, sep="")]] + 
            (1-beta1) * gradients[[paste('dW',L,sep="")]]
        v[[paste("db",L, sep="")]] = beta1*v[[paste("db",L, sep="")]] + 
            (1-beta1) * gradients[[paste('db',L,sep="")]]
                
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",L, sep="")]] = v[[paste("dW",L, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",L, sep="")]] = v[[paste("db",L, sep="")]]/(1-beta1^t)
                
        # Element wise multiply of gradients
        s[[paste("dW",L, sep="")]] = beta2*s[[paste("dW",L, sep="")]] + 
            (1-beta2) * gradients[[paste('dW',L,sep="")]] * gradients[[paste('dW',L,sep="")]]
        s[[paste("db",L, sep="")]] = beta2*s[[paste("db",L, sep="")]] + 
            (1-beta2) * gradients[[paste('db',L,sep="")]] * gradients[[paste('db',L,sep="")]]
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",L, sep="")]] = s[[paste("dW",L, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",L, sep="")]] = s[[paste("db",L, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",L, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",L, sep="")]]+epsilon)  
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate * v_corrected[[paste("dW",L, sep="")]]/d1
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate*v_corrected[[paste("db",L, sep="")]]/d2
        
    }else if (outputActivationFunc=="softmax"){
        v[[paste("dW",L, sep="")]] = beta1*v[[paste("dW",L, sep="")]] + 
            (1-beta1) * t(gradients[[paste('dW',L,sep="")]])
        v[[paste("db",L, sep="")]] = beta1*v[[paste("db",L, sep="")]] + 
            (1-beta1) * t(gradients[[paste('db',L,sep="")]])
                
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",L, sep="")]] = v[[paste("dW",L, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",L, sep="")]] = v[[paste("db",L, sep="")]]/(1-beta1^t)        
        
        # Element wise multiply of gradients
        s[[paste("dW",L, sep="")]] = beta2*s[[paste("dW",L, sep="")]] + 
            (1-beta2) * t(gradients[[paste('dW',L,sep="")]]) * t(gradients[[paste('dW',L,sep="")]])
        s[[paste("db",L, sep="")]] = beta2*s[[paste("db",L, sep="")]] + 
            (1-beta2) * t(gradients[[paste('db',L,sep="")]]) * t(gradients[[paste('db',L,sep="")]])
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",L, sep="")]] = s[[paste("dW",L, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",L, sep="")]] = s[[paste("db",L, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",L, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",L, sep="")]]+epsilon) 
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate * v_corrected[[paste("dW",L, sep="")]]/d1
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate*v_corrected[[paste("db",L, sep="")]]/d2
    }
    return(parameters)
}

5.1a. Stochastic Gradient Descent with Adam – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
print(len(training))
#for i in range(len(training)):
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T


# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10] 
#Perform SGD with Adam optimization
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="adam", beta1=0.9, beta2=0.9, epsilon = 1e-8,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True, figure="fig5.png")

5.1b. Stochastic Gradient Descent with Adam – R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
#Perform SGD with Adam
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.005,
                                  optimizer="adam",
                                  beta1=0.7,
                                  beta2=0.9,
                                  epsilon=10^-8,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000 , 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

5.1c. Stochastic Gradient Descent with Adam – Octave

source("DL7functions.m")
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);
# Set layer dimensions
layersDimensions=[784, 15, 9, 10];

# Note the high value for epsilon. 
#Otherwise GD with Adam does not seem to converge   
# Perform SGD with Adam         
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
                       hiddenActivationFunc='relu', 
                       outputActivationFunc="softmax",
                       learningRate = 0.1,
                       lrDecay=false, 
                       decayRate=1,
                       lambd=0,
                       keep_prob=1,
                       optimizer="adam",
                       beta=0.9,
                       beta1=0.9,
                       beta2=0.9,
                       epsilon=100,
                       mini_batch_size = 512, 
                       num_epochs = 5000);
plotCostVsEpochs(5000,costs)

Conclusion: In this post I discuss and implement several Stochastic Gradient Descent optimization methods. These methods enhance my existing generic L-Layer Deep Learning Network implementation in vectorized Python, R and Octave, which I discussed in the earlier posts of this series on Deep Learning from first principles in Python, R and Octave. Check them out, if you haven’t already. As already mentioned, the code for this post can be cloned/forked from GitHub at DeepLearning-Part7

Watch this space! I’ll be back!

Also see
1.My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Deep Learning from first principles in Python, R and Octave – Part 3
3. Experiments with deblurring using OpenCV
3. Design Principles of Scalable, Distributed Systems
4. Natural language processing: What would Shakespeare say?
5. yorkr crashes the IPL party! – Part 3!
6. cricketr flexes new muscles: The final analysis

To see all posts click Index of posts

Deep Learning from first principles in Python, R and Octave – Part 5

Introduction

a. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
b. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
c. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      Isaac Asimov's Three Laws of Robotics 

Any sufficiently advanced technology is indistinguishable from magic.

      Arthur C Clarke.   

In this 5th part on Deep Learning from first Principles in Python, R and Octave, I solve the MNIST data set of handwritten digits (shown below), from the basics. To do this, I construct an L-Layer, vectorized Deep Learning implementation in Python, R and Octave from scratch and classify the MNIST data set. The MNIST training data set contains 60,000 handwritten digits from 0-9, and the test set contains 10,000 digits. MNIST is a popular dataset for running Deep Learning tests, and has rightfully been termed the ‘drosophila’ of Deep Learning by none other than the venerable Prof Geoffrey Hinton.

The ‘Deep Learning from first principles in Python, R and Octave’ series has so far included Part 1, where I implemented logistic regression as a simple Neural Network, and Part 2, which implemented the most elementary neural network, with 1 hidden layer containing any number of activation units and a sigmoid activation at the output layer.

This post, ‘Deep Learning from first principles in Python, R and Octave – Part 5’, largely builds upon Part 3, in which I implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer, and in which the output layer was based on the sigmoid unit for binary classification. In Part 4, I derived the Jacobian of the Softmax, the Cross entropy loss and the gradient equations for a multi-class Softmax classifier. I also implemented a simple Neural Network using Softmax classification in Python, R and Octave.

In this post I combine Part 3 and Part 4 to build an L-layer Deep Learning network, with an arbitrary number of hidden layers and hidden units, which can do both binary (sigmoid) and multi-class (softmax) classification.

Note: A detailed discussion of the derivation for multi-class classification can be seen in my video presentation Neural Networks 5

The generic, vectorized L-Layer Deep Learning Network implementations in Python, R and Octave can be cloned/downloaded from GitHub at DeepLearning-Part5. This implementation allows for an arbitrary number of hidden layers and hidden layer units. The activation function at the hidden layers can be one of sigmoid, relu and tanh (I will be adding leaky relu soon). The output activation can be used for binary classification with ‘sigmoid’, or multi-class classification with ‘softmax’. Feel free to download and play around with the code!

I thought the exercise of combining the two parts (Part 3 & Part 4) would be a breeze. But it was anything but. Incorporating a Softmax classifier into the generic L-Layer Deep Learning model was a challenge. Moreover, I found that I could not use plain gradient descent on 60,000 training samples, as my laptop ran out of memory. So I had to implement Stochastic Gradient Descent (SGD) in Python, R and Octave. In addition, I also had to implement a numerically stable version of Softmax, as the softmax and its derivative would otherwise result in NaNs.

Numerically stable Softmax

The Softmax function S_{j} = \frac{e^{Z_{j}}}{\sum_{i=1}^{k}e^{Z_{i}}} can be numerically unstable because the exponentials of large scores overflow. To handle this problem we have to implement a stable Softmax function as below

S_{j} = \frac{e^{Z_{j}}}{\sum_{i=1}^{k}e^{Z_{i}}} = \frac{Ce^{Z_{j}}}{C\sum_{i=1}^{k}e^{Z_{i}}} = \frac{e^{Z_{j}+log(C)}}{\sum_{i=1}^{k}e^{Z_{i}+log(C)}}
Therefore S_{j} = \frac{e^{Z_{j}+ D}}{\sum_{i=1}^{k}e^{Z_{i}+ D}}
Here ‘D’ can be any constant. A common choice is
D = -max(Z_{1},Z_{2},..., Z_{k})

Here is the stable Softmax implementation in Python

# A numerically stable Softmax implementation
def stableSoftmax(Z):  
    #Compute the softmax of vector x in a numerically stable way.
    shiftZ = Z.T - np.max(Z.T,axis=1).reshape(-1,1)
    exp_scores = np.exp(shiftZ)
    # normalize them for each example
    A = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) 
    cache=Z
    return A,cache
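
A quick, illustrative way to see why the shift by the maximum matters (this snippet reuses the stableSoftmax function defined above; the scores are made up): with very large scores the naive softmax overflows to NaN, while the shifted version stays well defined.

import numpy as np

Z = np.array([[1000.0, 1001.0, 1002.0]]).T          # one example, three very large scores
# Naive softmax: np.exp(1000) overflows to inf, giving nan after normalization
naive = np.exp(Z.T) / np.sum(np.exp(Z.T), axis=1, keepdims=True)
print(naive)                                        # [[nan nan nan]] plus overflow warnings
A, _ = stableSoftmax(Z)
print(A)                                            # approx [[0.09 0.24 0.67]]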

While trying to create an L-Layer generic Deep Learning network in the 3 languages, I found it useful to ensure that the model executed correctly on smaller datasets. You can run into numerous problems while setting up the matrices, and these become extremely difficult to debug. So in this post, I run the model on 2 smaller data sets used in my earlier posts (Part 3 & Part 4), in each of the languages, before running the generic model on MNIST.

Here is a fair warning. If you think you can dive directly into Deep Learning with just some basic knowledge of Machine Learning, you are bound to run into serious issues. Moreover, your knowledge will be incomplete. It is essential that you have a good grasp of Machine and Statistical Learning: the different algorithms, the measures and metrics for selecting models etc. It would help to be conversant with all the ML models, ML concepts, validation techniques, classification measures etc. Check out the internet/books for background.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python:Second Edition- Machine Learning in stereo” available in Amazon in paperback($10.99) and Kindle($7.99/Rs449) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.

1. Random dataset with Sigmoid activation – Python

This random data set with 9 clusters was used in my post Deep Learning from first principles in Python, R and Octave – Part 3, and is used here to test the complete L-layer Deep Learning network with Sigmoid activation.

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification, make_blobs
exec(open("DLfunctions51.py").read()) # Cannot import in Rmd.
# Create a random data set with 9 centers
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,cluster_std = 1.3, random_state =4)
                       
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of L -layer DL network
layersDimensions = [2, 9, 9,1] #  4-layer model
# Execute DL network with hidden activation=relu and sigmoid output function
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="sigmoid",learningRate = 0.3,num_iterations = 2500, print_cost = True)

2. Spiral dataset with Softmax activation – Python

The Spiral data set was used in my post Deep Learning from first principles in Python, R and Octave – Part 4 and is used here to test the complete L-layer Deep Learning network with multi-class Softmax activation at the output layer

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification, make_blobs
exec(open("DLfunctions51.py").read())

# Create an input data set - Taken from CS231n Convolutional Neural networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j

X1=X.T
Y1=y.reshape(-1,1).T
numHidden=100 # No of hidden units in hidden layer
numFeats= 2 # dimensionality
numOutput = 3 # number of classes
# Set the dimensions of the layers
layersDimensions=[numFeats,numHidden,numOutput]
parameters = L_Layer_DeepModel(X1, Y1, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="softmax",learningRate = 0.6,num_iterations = 9000, print_cost = True)
## Cost after iteration 0: 1.098759
## Cost after iteration 1000: 0.112666
## Cost after iteration 2000: 0.044351
## Cost after iteration 3000: 0.027491
## Cost after iteration 4000: 0.021898
## Cost after iteration 5000: 0.019181
## Cost after iteration 6000: 0.017832
## Cost after iteration 7000: 0.017452
## Cost after iteration 8000: 0.017161

3. MNIST dataset with Softmax activation – Python

In the code below, I execute Stochastic Gradient Descent on the MNIST training data of 60,000 samples with a mini-batch size of 1000. Python takes about 40 minutes to crunch the data. In addition I also compute the Confusion Matrix and other metrics like Accuracy, Precision and Recall for the MNIST data set. I get an accuracy of 0.93 on the MNIST test set. This accuracy can be improved by choosing more hidden layers or more hidden units, and possibly also by tweaking the learning rate and the number of epochs.

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import math
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
exec(open("DLfunctions51.py").read())
exec(open("load_mnist.py").read())
# Read the MNIST training and test sets
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
# Create labels and pixel arrays
lbls=[]
pxls=[]
print(len(training))
#for i in range(len(training)):
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T
# Set the dimensions of the layers. The MNIST data is 28x28 pixels= 784
# Hence input layer is 784. For the 10 digits the Softmax classifier
# has to handle 10 outputs
layersDimensions=[784, 15,9,10] # Works very well,lr=0.01,mini_batch =1000, total=20000
np.random.seed(1)
costs = []  
# Run Stochastic Gradient Descent with Learning Rate=0.01, mini batch size=1000
# number of epochs=3000
parameters = L_Layer_DeepModel_SGD(X1, Y1, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="softmax",learningRate = 0.01 ,mini_batch_size =1000, num_epochs = 3000, print_cost = True)

# Compute the Confusion Matrix on Training set
# Compute the training accuracy, precision and recall
proba=predict_proba(parameters, X1,outputActivationFunc="softmax")
#A2, cache = forwardPropagationDeep(X1, parameters)
#proba=np.argmax(A2, axis=0).reshape(-1,1)
a=confusion_matrix(Y1.T,proba)
print(a)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y1.T, proba)))
print('Precision: {:.2f}'.format(precision_score(Y1.T, proba,average="micro")))
print('Recall: {:.2f}'.format(recall_score(Y1.T, proba,average="micro")))

# Read the test data
lbls=[]
pxls=[]
print(len(test))
for i in range(10000):
       l,p=test[i]
       lbls.append(l)
       pxls.append(p)
testLabels= np.array(lbls)
testPixels=np.array(pxls)       
ytest=testLabels.reshape(-1,1)
Xtest=testPixels.reshape(testPixels.shape[0],-1)
X1test=Xtest.T
Y1test=ytest.T

# Compute the Confusion Matrix on Test set
# Compute the test accuracy, precision and recall
probaTest=predict_proba(parameters, X1test,outputActivationFunc="softmax")
#A2, cache = forwardPropagationDeep(X1, parameters)
#proba=np.argmax(A2, axis=0).reshape(-1,1)
a=confusion_matrix(Y1test.T,probaTest)
print(a)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y1test.T, probaTest)))
print('Precision: {:.2f}'.format(precision_score(Y1test.T, probaTest,average="micro")))
print('Recall: {:.2f}'.format(recall_score(Y1test.T, probaTest,average="micro")))
##1.  Confusion Matrix of Training set
       0     1    2    3    4    5    6    7    8    9
## [[5854    0   19    2   10    7    0    1   24    6]
##  [   1 6659   30   10    5    3    0   14   20    0]
##  [  20   24 5805   18    6   11    2   32   37    3]
##  [   5    4  175 5783    1   27    1   58   60   17]
##  [   1   21    9    0 5780    0    5    2   12   12]
##  [  29    9   21  224    6 4824   18   17  245   28]
##  [   5    4   22    1   32   12 5799    0   43    0]
##  [   3   13  148  154   18    3    0 5883    4   39]
##  [  11   34   30   21   13   16    4    7 5703   12]
##  [  10    4    1   32  135   14    1   92  134 5526]]

##2. Accuracy, Precision, Recall of  Training set
## Accuracy: 0.96
## Precision: 0.96
## Recall: 0.96

##3. Confusion Matrix of Test set
       0     1    2    3    4    5    6    7    8    9
## [[ 954    1    8    0    3    3    2    4    4    1]
##  [   0 1107    6    5    0    0    1    2   14    0]
##  [  11    7  957   10    5    0    5   20   16    1]
##  [   2    3   37  925    3   13    0    8   18    1]
##  [   2    6    1    1  944    0    7    3    4   14]
##  [  12    5    4   45    2  740   24    8   42   10]
##  [   8    4    4    2   16    9  903    0   12    0]
##  [   4   10   27   18    5    1    0  940    1   22]
##  [  11   13    6   13    9   10    7    2  900    3]
##  [   8    5    1    7   50    7    0   20   29  882]]
##4. Accuracy, Precision, Recall of Test set
## Accuracy: 0.93
## Precision: 0.93
## Recall: 0.93

4. Random dataset with Sigmoid activation – R

This is the random data set used in the Python code above, which was saved as a CSV. The code below is used to test an L-Layer DL network with Sigmoid activation in R.

source("DLfunctions5.R")
# Read the random data set
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X <- t(x)
Y <- t(y)
# Set the dimensions of the  layer
layersDimensions = c(2, 9, 9,1)

# Run Gradient Descent on the data set with relu hidden unit activation 
# sigmoid activation unit in the output layer
retvals = L_Layer_DeepModel(X, Y, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="sigmoid",
                            learningRate = 0.3,
                            numIterations = 5000, 
                            print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvals$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs iterations") + xlab("Iterations") + ylab("Loss")

5. Spiral dataset with Softmax activation – R

The spiral data set used in the Python code above is reused to test multi-class classification with Softmax.

source("DLfunctions5.R")
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X <- t(X)
Y <- t(y)

# Initialize number of features, number of hidden units in hidden layer and
# number of classes
numFeats<-2 # No features
numHidden<-100 # No of hidden units
numOutput<-3 # No of classes

# Set the layer dimensions
layersDimensions = c(numFeats,numHidden,numOutput)

# Perform gradient descent with relu activation unit for hidden layer
# and softmax activation in the output
retvals = L_Layer_DeepModel(X, Y, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="softmax",
                            learningRate = 0.5,
                            numIterations = 9000, 
                            print_cost = TRUE)
#Plot cost vs iterations
iterations <- seq(0,9000,1000)
costs=retvals$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs iterations") + xlab("Iterations") + ylab("Costs")

6. MNIST dataset with Softmax activation – R

The code below executes an L-Layer Deep Learning network with Softmax output activation, to classify the 10 handwritten digits from MNIST with Stochastic Gradient Descent. The entire data set of 60,000 samples was used to train the model. R takes almost 8 hours to process this data set with a mini-batch size of 1000. The use of ‘for’ loops is limited to iterating through epochs and mini-batches, and to creating the mini-batches themselves. All other code is vectorized. Yet, it seems to crawl. Most likely the use of ‘lists’ in R to return multiple values is performance intensive. Some day, I will try to profile the code and see where the issue is. However, the code works!

Having said that, the Confusion Matrix in R dumps a lot of interesting statistics! There is a bunch of statistical measures for each class. For e.g. the Balanced Accuracy for the digits ‘6’ and ‘9’ is around 50%. It looks like the classifier is confused by the fact that a 6 is an inverted 9 and vice-versa. The accuracy on the Test data set is just around 75%. I could have played around with the number of layers, number of hidden units, learning rates, epochs etc. to get a much higher accuracy. But since each test took about 8+ hours, I may work on this some other day!

source("DLfunctions5.R")
source("mnist.R")
#Load the mnist data
load_mnist()
show_digit(train$x[2,])
#Set the layer dimensions
layersDimensions=c(784, 15,9, 10) # Works at 1500
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 32768 random samples from MNIST 
permutation = c(sample(2^15))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)

# Execute Stochastic Gradient Descent on the randomly shuffled training subset
# with Softmax activation
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="softmax",
                            learningRate = 0.05,
                            mini_batch_size = 512, 
                            num_epochs = 1, 
                            print_cost = TRUE)

# Compute the Confusion Matrix
library(caret)
library(e1071)
predictions=predictProba(retvalsSGD[['parameters']], X,hiddenActivationFunc='relu',
                   outputActivationFunc="softmax")
confusionMatrix(predictions,Y)
# Confusion Matrix on the Training set
> confusionMatrix(predictions,Y)
Confusion Matrix and Statistics

          Reference
Prediction    0    1    2    3    4    5    6    7    8    9
         0 5738    1   21    5   16   17    7   15    9   43
         1    5 6632   21   24   25    3    2   33   13  392
         2   12   32 5747  106   25   28    3   27   44 4779
         3    0   27   12 5715    1   21    1   20    1   13
         4   10    5   21   18 5677    9   17   30   15  166
         5  142   21   96  136   93 5306 5884   43   60  413
         6    0    0    0    0    0    0    0    0    0    0
         7    6    9   13   13    3    4    0 6085    0   55
         8    8   12    7   43    1   32    2    7 5703   69
         9    2    3   20   71    1    1    2    5    6   19

Overall Statistics
                                          
               Accuracy : 0.777           
                 95% CI : (0.7737, 0.7804)
    No Information Rate : 0.1124          
    P-Value [Acc > NIR] : < 2.2e-16       
                                          
                  Kappa : 0.7524          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: 0 Class: 1 Class: 2 Class: 3 Class: 4 Class: 5 Class: 6
Sensitivity           0.96877   0.9837  0.96459  0.93215  0.97176  0.97879  0.00000
Specificity           0.99752   0.9903  0.90644  0.99822  0.99463  0.87380  1.00000
Pos Pred Value        0.97718   0.9276  0.53198  0.98348  0.95124  0.43513      NaN
Neg Pred Value        0.99658   0.9979  0.99571  0.99232  0.99695  0.99759  0.90137
Prevalence            0.09872   0.1124  0.09930  0.10218  0.09737  0.09035  0.09863
Detection Rate        0.09563   0.1105  0.09578  0.09525  0.09462  0.08843  0.00000
Detection Prevalence  0.09787   0.1192  0.18005  0.09685  0.09947  0.20323  0.00000
Balanced Accuracy     0.98314   0.9870  0.93551  0.96518  0.98319  0.92629  0.50000
                     Class: 7 Class: 8  Class: 9
Sensitivity            0.9713  0.97471 0.0031938
Specificity            0.9981  0.99666 0.9979464
Pos Pred Value         0.9834  0.96924 0.1461538
Neg Pred Value         0.9967  0.99727 0.9009521
Prevalence             0.1044  0.09752 0.0991500
Detection Rate         0.1014  0.09505 0.0003167
Detection Prevalence   0.1031  0.09807 0.0021667
Balanced Accuracy      0.9847  0.98568 0.5005701
# Confusion Matrix on the Test set
xtest <- t(test$x)
Xtest <- xtest[,1:10000]
ytest <- test$y
ytest1 <- ytest[1:10000]
ytest2 <- as.matrix(ytest1)
Ytest=t(ytest2)

Confusion Matrix and Statistics

          Reference
Prediction    0    1    2    3    4    5    6    7    8    9
         0  950    2    2    3    0    6    9    4    7    6
         1    3 1110    4    2    9    0    3   12    5   74
         2    2    6  965   21    9   14    5   16   12  789
         3    1    2    9  908    2   16    0   21    2    6
         4    0    1    9    5  938    1    8    6    8   39
         5   19    5   25   35   20  835  929    8   54   67
         6    0    0    0    0    0    0    0    0    0    0
         7    4    4    7   10    2    4    0  952    5    6
         8    1    5    8   14    2   16    2    3  876   21
         9    0    0    3   12    0    0    2    6    5    1

Overall Statistics
                                          
               Accuracy : 0.7535          
                 95% CI : (0.7449, 0.7619)
    No Information Rate : 0.1135          
    P-Value [Acc > NIR] : < 2.2e-16       
                                          
                  Kappa : 0.7262          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: 0 Class: 1 Class: 2 Class: 3 Class: 4 Class: 5 Class: 6
Sensitivity            0.9694   0.9780   0.9351   0.8990   0.9552   0.9361   0.0000
Specificity            0.9957   0.9874   0.9025   0.9934   0.9915   0.8724   1.0000
Pos Pred Value         0.9606   0.9083   0.5247   0.9390   0.9241   0.4181      NaN
Neg Pred Value         0.9967   0.9972   0.9918   0.9887   0.9951   0.9929   0.9042
Prevalence             0.0980   0.1135   0.1032   0.1010   0.0982   0.0892   0.0958
Detection Rate         0.0950   0.1110   0.0965   0.0908   0.0938   0.0835   0.0000
Detection Prevalence   0.0989   0.1222   0.1839   0.0967   0.1015   0.1997   0.0000
Balanced Accuracy      0.9825   0.9827   0.9188   0.9462   0.9733   0.9043   0.5000
                     Class: 7 Class: 8  Class: 9
Sensitivity            0.9261   0.8994 0.0009911
Specificity            0.9953   0.9920 0.9968858
Pos Pred Value         0.9577   0.9241 0.0344828
Neg Pred Value         0.9916   0.9892 0.8989068
Prevalence             0.1028   0.0974 0.1009000
Detection Rate         0.0952   0.0876 0.0001000
Detection Prevalence   0.0994   0.0948 0.0029000
Balanced Accuracy      0.9607   0.9457 0.4989384

7. Random dataset with Sigmoid activation – Octave

The Octave code below uses the random data set used by the Python code above and implements an L-Layer Deep Learning network with Sigmoid activation.


source("DL5functions.m")
# Read the data
data=csvread("data.csv");

X=data(:,1:2);
Y=data(:,3);
#Set the layer dimensions
layersDimensions = [2 9 7  1]; #tanh=-0.5(ok), #relu=0.1 best!
# Perform gradient descent 
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
                               hiddenActivationFunc='relu', 
                               outputActivationFunc="sigmoid",
                               learningRate = 0.1,
                               numIterations = 10000);
# Plot cost vs iterations
plotCostVsIterations(10000,costs);       

8. Spiral dataset with Softmax activation – Octave

The code below uses the spiral data set used by the Python code above and implements an L-Layer Deep Learning network with Softmax activation.

# Read the data
data=csvread("spiral.csv");

# Setup the data
X=data(:,1:2);
Y=data(:,3);

# Set the number of features, number of hidden units in hidden layer and number of classess
numFeats=2; #No features
numHidden=100; # No of hidden units
numOutput=3; # No of  classes
# Set the layer dimensions
layersDimensions = [numFeats numHidden  numOutput];  
#Perform gradient descent with softmax activation unit
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
                               hiddenActivationFunc='relu', 
                               outputActivationFunc="softmax",
                               learningRate = 0.1,
                               numIterations = 10000); 

9. MNIST dataset with Softmax activation – Octave

The code below implements an L-Layer Deep Learning Network in Octave with a Softmax output activation unit, for classifying the 10 handwritten digits in the MNIST dataset. Unfortunately, Octave can only index around 10,000 training samples at a time. I was getting the error ‘error: out of memory or dimension too large for Octave’s index type error: called from…’ when I tried to create a batch size of 20,000. So I had to come up with a workaround: create a batch of 10,000 samples (randomly) and then use a mini-batch of 1000 samples and execute Stochastic Gradient Descent. The performance was good. Octave takes about 15 minutes on a batch size of 10,000 and a mini-batch of 1000.

I thought that, if the performance was not good, I could iterate through these random batches and refine the gradients as follows

# Pseudo code that could be used since Octave only allows 10K batches
# at a time
# Randomly create weights
[weights biases] = initialize_weights()
for i=1:k
    # Create a random permutation and create a random batch
    permutation = randperm(10000);
    X=trainX(permutation,:);
    Y=trainY(permutation,:);
    # Compute weights from SGD and update weights in the next batch update
    [weights biases costs]=L_Layer_DeepModel_SGD(X,Y,mini_batch=1000,weights, biases,...);
    ...
endfor
# Load the MNIST data
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 10000 indices
permutation = randperm(10000);
disp(length(permutation));

# Use this 10K as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];

# Run Stochastic Gradient Descent on the 10K batch with the mini batch size set below
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
                       hiddenActivationFunc='relu', 
                       outputActivationFunc="softmax",
                       learningRate = 0.01,
                       mini_batch_size = 2000, num_epochs = 5000);   

10. Final thoughts

Here are some of my final thoughts after working on Python, R and Octave in this series and in other projects
1. Python, with its highly optimized numpy library, is ideally suited for creating Deep Learning models, which involve a lot of matrix manipulation. Python is a real workhorse when it comes to Deep Learning computations.
2. R is somewhat clunky in comparison to its cousin Python in handling matrices or in returning multiple values. But R’s statistical libraries, dplyr, and ggplot are really superior to their Python peers. Also, I find that R handles dataframes much better than Python.
3. Octave is a no-nonsense, minimalist language which is very efficient in handling matrices. It is ideally suited for implementing Machine Learning and Deep Learning from scratch. But Octave has its problems: it cannot handle large matrix sizes, and it also lacks the statistical libraries of R and Python. They possibly exist in its sibling, MATLAB.

Feel free to clone/download the code from  GitHub at DeepLearning-Part5.

Conclusion

Building a Deep Learning Network from scratch is quite challenging and time-consuming, but nevertheless an exciting task. While the statements in the different languages for manipulating matrices, summing up columns, or finding columns which have ones don’t take more than a single line, extreme care has to be taken to ensure that the statements work well for any dimension. The lessons learnt from creating an L-Layer Deep Learning network are many and well worth it. Give it a try!

Hasta la vista! I’ll be back, so stick around!
Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. CS231 Convolutional Neural Networks for Visual Recognition
4. Eli Bendersky’s Website – The Softmax function and its derivative

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Presentation on Wireless Technologies – Part 1
3. Exploring Quantum Gate operations with QCSimulator
4. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
5. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
6. cricketr plays the ODIs!
7. “Is it an animal? Is it an insect?” in Android
8. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
9. Deblurring with OpenCV: Weiner filter reloaded
10. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables

To see all posts click Index of Posts

 

Presentation on ‘Machine Learning in plain English – Part 2’

This presentation is a continuation of my earlier one, Presentation on ‘Machine Learning in plain English – Part 1’. As the title suggests, the presentation is devoid of any math or programming constructs, and just focuses on the concepts and approaches of different Machine Learning algorithms. In this 2nd part, I discuss KNN regression, KNN classification, Cross Validation techniques (LOOCV, K-Fold), feature selection methods including best-fit, forward-fit and backward-fit, and finally Ridge (L2) and Lasso (L1) Regression.


If you would like to see implementations of the algorithms discussed in this presentation, do check out My book ‘Practical Machine Learning with R and Python’ on Amazon

You may also like
1.A primer on Qubits, Quantum gates and Quantum Operations
2.Deep Learning from first principles in Python, R and Octave – Part 4
3.Latency, throughput implications for the Cloud
4.Brewing a potion with Bluemix, PostgreSQL, Node.js in the cloud
5.Practical Machine Learning with R and Python – Part 6
6. Introducing cricketr! : An R package to analyze performances of cricketers

To see all posts click Index of posts

My book ‘Practical Machine Learning with R and Python’ on Amazon

Note: The 3rd edition of this book is now available My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

My book ‘Practical Machine Learning with R and Python: Second Edition – Machine Learning in stereo’ is now available in both paperback ($10.99) and kindle ($7.99/Rs449) versions. In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code. This is almost like listening to parallel channels of music in stereo!
1. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Paperback – $12.99)
2. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Kindle – $8.99/Rs449)
This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Table of Contents
Essential R – 7
Essential Python for Datascience – 54
R vs Python – 77
Regression of a continuous variable – 96
Classification and Cross Validation – 113
Regression techniques and regularization – 134
SVMs, Decision Trees and Validation curves – 175
Splines, GAMs, Random Forests and Boosting – 202
PCA, K-Means and Hierarchical Clustering – 234

Pick up your copy today!!
Hope you have a great time learning as I did while implementing these algorithms!

Neural Networks: The mechanics of backpropagation

The initial work on the ‘Backpropagation Algorithm’ started in the 1980s and led to an explosion of interest in Neural Networks and in the application of backpropagation

The ‘Backpropagation’ algorithm computes the minimum of an error function with respect to the weights in the Neural Network. It uses the method of gradient descent. The combination of weights in a multi-layered neural network, which minimizes the error/cost function is considered to be a solution of the learning problem.

[Image: neuron-1]

In the Neural Network above
out_{o1} = \sum_{i} w_{i}*x_{i}
E = 1/2(target - out)^{2}
\partial E/\partial out = 1/2*2*(target - out) *-1 = -(target - out)
\partial E/\partial w_{i} = \partial E/\partial out * \partial out/\partial w_{i}
\partial E/\partial w_{i} = -(target - out) * x_{i}
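
As a small numerical illustration of the gradient above (the weights, inputs and target below are my own made-up values), the following computes the error, the gradient and one gradient descent step for this single linear neuron.

import numpy as np

w = np.array([0.5, -0.3])          # made-up weights
x = np.array([1.0, 2.0])           # made-up inputs
target = 1.0
alpha = 0.1                        # learning rate

out = np.dot(w, x)                 # out_o1 = sum_i w_i * x_i
E = 0.5 * (target - out) ** 2      # squared error
dE_dw = -(target - out) * x        # dE/dw_i = -(target - out) * x_i
w_new = w - alpha * dE_dw          # one gradient descent step
print(out, E, dE_dw, w_new)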

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

Perceptrons and single layered neural networks can only classify if the sample space is linearly separable. For non-linear decision boundaries, a multi layered neural network with backpropagation is required to generate more complex boundaries. The backpropagation algorithm computes the minimum of the error function in weight space using the method of gradient descent. This computation of the gradient requires the activation function to be both differentiable and continuous. Hence the sigmoid or logistic function is typically chosen as the activation function at every layer.

This post looks at a 3 layer neural network with 1 input, 1 hidden and 1 output layer. To a large extent this post is based on Matt Mazur’s detailed “A step by step backpropagation example“, Prof Hinton’s “Neural Networks for Machine Learning” at Coursera, and a few other sources.

While Matt Mazur’s post uses example values, I generate the formulas for the gradient derivatives for each weight in the hidden and input layers. I intend to implement a vector version of backpropagation in Octave, R and Python. So this post is a prequel to that.

The 3 layer neural network is as below

[Image: nn – the 3 layer neural network]

Some basic derivations which are used in backpropagation

Chain rule of differentiation
Let y=f(u)
and u=g(x) then
\partial y/\partial x = \partial y/\partial u * \partial u/\partial x

An important result
y=1/(1+e^{-z})
Let x= 1 + e^{-z}  then
y = 1/x
\partial y/\partial x = -1/x^{2}
\partial x/\partial z = -e^{-z}

Using the chain rule of differentiation we get
\partial y/\partial z = \partial y/\partial x * \partial x/\partial z
=-1/(1+e^{-z})^{2}* -e^{-z} = e^{-z}/(1+e^{-z})^{2}
Therefore \partial y/\partial z = y(1-y)                                   -(A)
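
Result (A) is easy to verify numerically; the short sketch below (my own code, not from the original post) compares the analytic derivative y(1-y) with a central finite difference.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.7
y = sigmoid(z)
analytic = y * (1 - y)                                   # result (A): dy/dz = y(1-y)
h = 1e-6
numeric = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)    # central difference approximation
print(analytic, numeric)                                 # both are approximately 0.2217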

1) Feed forward network
The net input at the hidden layer
in_{h1} = w_{1}i_{1} + w_{2}i_{2} + b_{1}
in_{h2} = w_{3}i_{1} + w_{4}i_{2} + b_{1}

The sigmoid/logistic function is used to generate the activation outputs for each hidden layer. The sigmoid is chosen because it is continuous and also has a continuous derivative

out_{h1} = 1/(1+e^{-in_{h1}})
out_{h2} = 1/(1+e^{-in_{h2}})

The net input at the output layer
in_{o1} = w_{5}out_{h_{1}} +  w_{6}out_{h_{2}} + b_{2}
in_{o2} = w_{7}out_{h_{1}} +  w_{8}out_{h_{2}} + b_{2}

Total error
E_{total} = 1/2\sum (target - output)^{2}
E_{total} = E_{o1} + E_{o2}
E_{total} = 1/2(target_{o_{1}} - out_{o_{1}})^{2} + 1/2(target_{o_{2}} - out_{o_{2}})^{2}
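
For concreteness, the feed-forward pass and the total error above can be written out directly for this 2-2-2 network. The sketch below uses illustrative values in the spirit of Matt Mazur's worked example that this post references.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative inputs, weights, biases and targets
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60
target_o1, target_o2 = 0.01, 0.99

# Hidden layer
in_h1 = w1 * i1 + w2 * i2 + b1
in_h2 = w3 * i1 + w4 * i2 + b1
out_h1, out_h2 = sigmoid(in_h1), sigmoid(in_h2)

# Output layer
in_o1 = w5 * out_h1 + w6 * out_h2 + b2
in_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1, out_o2 = sigmoid(in_o1), sigmoid(in_o2)

# Total squared error
E_total = 0.5 * (target_o1 - out_o1) ** 2 + 0.5 * (target_o2 - out_o2) ** 2
print(out_o1, out_o2, E_total)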

2) The backward pass
In the backward pass we need to compute how the squared error changes with a change in each weight, i.e. we compute \partial E_{total}/\partial w_{i} for each weight w_{i}. This is shown below

A squared error is assumed

Error gradient with respect to w_{5}

[Image: output]
 \partial E_{total}/\partial w_{5} = \partial E_{total}/\partial out_{o_{1}} * \partial out_{o_{1}}/\partial in_{o_{1}} * \partial in_{o_{1}}/ \partial w_{5}                -(B)

Since
E_{total} = 1/2\sum (target - output)^{2}
E_{total} = 1/2(target_{o_{1}} - out_{o_{1}})^{2} + 1/2(target_{o_{2}} - out_{o_{2}})^{2}
 \partial E_{total}/\partial out_{o1} = \partial E_{o1}/\partial out_{o1} + \partial E_{o2}/\partial out_{o1}
 \partial E_{total}/\partial out_{o1} = \partial /\partial out_{o1}[1/2(target_{o1}-out_{o1})^{2} + 1/2(target_{o2}-out_{o2})^{2}]
 \partial E_{total}/\partial out_{o1} = 2 * 1/2*(target_{o1} - out_{o1}) *-1 + 0 = -(target_{o1} - out_{o1})

Now considering the 2nd term in (B)
\partial out_{o1}/\partial in_{o1} = \partial/\partial in_{o1} [1/(1+e^{-in_{o1}})]

Using result (A)
 \partial out_{o1}/\partial in_{o1} = \partial/\partial in_{o1} [1/(1+e^{-in_{o1}})] = out_{o1}(1-out_{o1})

The 3rd term in (B)
 \partial in_{o1}/\partial w_{5} = \partial/\partial w_{5} [w_{5}*out_{h1} + w_{6}*out_{h2}] = out_{h1}
 \partial E_{total}/\partial w_{5}=-(target_{o1} - out_{o1}) * out_{o1} *(1-out_{o1}) * out_{h1}

Having computed \partial E_{total}/\partial w_{5}, we now perform gradient descent, by computing a new weight, assuming a learning rate \alpha
 w_{5}^{+} = w_{5} - \alpha * \partial E_{total}/\partial w_{5}

If we do the same for \partial E_{total}/\partial w_{6} (the weight from out_{h2} to out_{o1}) we would get
 \partial E_{total}/\partial w_{6}=-(target_{o1} - out_{o1}) * out_{o1} *(1-out_{o1}) * out_{h2}
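
Putting the pieces of (B) together in code: the sketch below computes the gradient for w_5 and one update step, assuming out_o1, out_h1 and the target come from a forward pass like the one sketched earlier (the numbers are illustrative).

# Illustrative forward-pass quantities
target_o1, out_o1, out_h1 = 0.01, 0.75, 0.59
w5, alpha = 0.40, 0.5

# dE_total/dw5 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1) * out_h1
dE_dw5 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1) * out_h1
w5_new = w5 - alpha * dE_dw5     # gradient descent step for w5
print(dE_dw5, w5_new)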

3) Hidden layer

[Image: hidden]
We now compute how the total error changes for a change in weight w_{1}
 \partial E_{total}/\partial w_{1}= \partial E_{total}/\partial out_{h1}* \partial out_{h1}/\partial in_{h1} * \partial in_{h1}/\partial w_{1} – (C)

Using
E_{total} = E_{o1} + E_{o2} we get
 \partial E_{total}/\partial w_{1}= (\partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}) * \partial out_{h1}/\partial in_{h1} * \partial in_{h1}/\partial w_{1}
\partial E_{total}/\partial w_{1}=(\partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}) * out_{h1}*(1-out_{h1})*i_{1}     -(D)

Considering the 1st term in (C)
 \partial E_{total}/\partial out_{h1}= \partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}

Now
 \partial E_{o1}/\partial out_{h1} = \partial E_{o1}/\partial out_{o1} *\partial out_{o1}/\partial in_{01} * \partial in_{o1}/\partial out_{h1}
 \partial E_{o2}/\partial out_{h1} = \partial E_{o2}/\partial out_{o2} *\partial out_{o2}/\partial in_{02} * \partial in_{o2}/\partial out_{h1}

which gives the following
 \partial E_{o1}/\partial out_{o1} *\partial out_{o1}/\partial in_{o1} * \partial in_{o1}/\partial out_{h1} =-(target_{o1}-out_{o1}) *out_{o1}(1-out_{o1})*w_{5} – (E)
 \partial E_{o2}/\partial out_{o2} *\partial out_{o2}/\partial in_{o2} * \partial in_{o2}/\partial out_{h1} =-(target_{o2}-out_{o2}) *out_{o2}(1-out_{o2})*w_{7} – (F)

Combining (D), (E) & (F) we get
\partial E_{total}/\partial w_{1} = -[(target_{o1}-out_{o1}) *out_{o1}(1-out_{o1})*w_{5} + (target_{o2}-out_{o2}) *out_{o2}(1-out_{o2})*w_{7}]*out_{h1}*(1-out_{h1})*i_{1}

This can be represented as
\partial E_{total}/\partial w_{1} = -\sum_{i}[(target_{oi}-out_{oi}) *out_{oi}(1-out_{oi})*w_{j}]*out_{h1}*(1-out_{h1})*i_{1}
where w_{j} is the weight connecting the hidden unit h_{1} to the output unit o_{i}

With this derivative a new value of w_{1} is computed
 w_{1}^{+} = w_{1} - \alpha * \partial E_{total}/\partial w_{1}

Hence there are 2 important results
At the output layer we have
a)  \partial E_{total}/\partial w_{j}=-(target_{oi} - out_{oi}) * out_{oi} *(1-out_{oi}) * out_{hi}
At each hidden layer we compute
b) \partial E_{total}/\partial w_{k} = -\sum_{i}[(target_{oi}-out_{oi}) *out_{oi}(1-out_{oi})*w_{j}]*out_{hk}*(1-out_{hk})*i_{k}
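
The two results above translate almost line for line into code. The sketch below applies result (b) to the weight w_1 of the same 2-2-2 network; all numbers are illustrative and the 'delta' helper terms are my own shorthand.

# Illustrative forward-pass quantities for the 2-2-2 network
i1 = 0.05
out_h1 = 0.59
out_o1, out_o2 = 0.75, 0.77
target_o1, target_o2 = 0.01, 0.99
w5, w7 = 0.40, 0.50              # weights from h1 to o1 and from h1 to o2
w1, alpha = 0.15, 0.5

# Output-layer "delta" terms: -(target - out) * out * (1 - out)
delta_o1 = -(target_o1 - out_o1) * out_o1 * (1 - out_o1)
delta_o2 = -(target_o2 - out_o2) * out_o2 * (1 - out_o2)

# Result (b): back-propagate through the weights that connect h1 to the outputs
dE_dw1 = (delta_o1 * w5 + delta_o2 * w7) * out_h1 * (1 - out_h1) * i1
w1_new = w1 - alpha * dE_dw1
print(dE_dw1, w1_new)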

Backpropagation was very successful in the early years, but the algorithm does have its problems, e.g. the issues of the ‘vanishing’ and ‘exploding’ gradient. Yet it is a very key development in Neural Networks, and the issues with the backprop gradients have been addressed through techniques such as the momentum method, adaptive learning rates etc.

In this post, I derive the weight updates at the output layer and the hidden layer. As I already mentioned above, I intend to implement a vectorized version of the backpropagation algorithm in Octave, R and Python in the days to come.

Watch this space! I’ll be back

P.S. If you find any typos/errors, do let me know!

References
1. Neural Networks for Machine Learning by Prof Geoffrey Hinton
2. A Step by Step Backpropagation Example by Matt Mazur
3. The Backpropagation algorithm by R Rojas
4. Backpropagation Learning Artificial Neural Networks David S Touretzky
5. Artificial Intelligence, Prof Sudeshna Sarkar, NPTEL

Also see my other posts
1. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
2. Design Principles of Scalable, Distributed Systems
3. A method for optimal bandwidth usage by auctioning available bandwidth using the OpenFlow protocol
4. De-blurring revisited with Wiener filter using OpenCV
5. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
6. Re-introducing cricketr! : An R package to analyze performances of cricketers

To see all my posts go to ‘Index of Posts’

Informed choices through Machine Learning-2: Pitting together Kumble, Kapil, Chandra

Continuing my earlier ‘innings’ of test driving my knowledge in Machine Learning acquired via Coursera, I now turn my attention towards the bowling performances of our Indian bowling heroes. In this post I give a slightly different ‘spin’ to the bowling analysis and hope I can ‘swing’ your opinion based on my assessment.

I guess that is enough of my cricketing ‘double-speak’ for now and I will get down to the real business of my bowling analysis!

If you are passionate about cricket, and love analyzing cricket performances, then check out my 2 racy books on cricket! In my books, I perform detailed yet compact analysis of performances of both batsmen, bowlers besides evaluating team & match performances in Tests , ODIs, T20s & IPL. You can buy my books on cricket from Amazon at $12.99 for the paperback and $4.99/$6.99 respectively for the kindle versions. The books can be accessed at Cricket analytics with cricketr  and Beaten by sheer pace-Cricket analytics with yorkr  A must read for any cricket lover! Check it out!!


As in my earlier post Informed choices through Machine Learning – Analyzing Kohli, Tendulkar and Dravid, the first part of this post has my analyses and the latter part has the details of the implementation of the algorithm. Feel free to read the first part and either scan or skip the latter.

To perform this analysis I have skipped the data on our recent crop of new bowlers. The reason is that data on these bowlers is scant, and they also seem to have a relatively shorter shelf life (hope there are a couple of finds in this Australian tour of Dec 2014). For the analyses I have chosen B S Chandrasekhar, Kapil Dev and Anil Kumble. My rationale for choosing the above 3 is given below.

B S Chandrasekhar, also known as 'Chandra', was one of the most lethal leg spinners of the late 1970's. He had a very dangerous combination of fast leg breaks and searing top spins, interspersed with the occasional googly. On many occasions he would leave most batsmen completely clueless.

Kapil Nikhanj Dev, the Haryana Hurricane, could outwit the most technically sound batsmen through some really clever bowling. His variations were almost always effective and he would achieve the vital breakthrough by outsmarting the opponent.

And finally Anil Kumble, I chose Kumble because in my opinion he is truly the embodiment of the ‘thinking’ bowler. Many times I have seen Kumble repeatedly beat batsmen. It was like he was telling the batsman ‘check’ as he bowled faster leg breaks, flippers, a straighter delivery or top spins before finally crashing into the wickets or trapping the batsmen. It felt he was saying ‘checkmate dude!’

I have taken the data for the 3 bowlers from ESPN Cricinfo. Only Test matches were considered for the analyses. All Tests against all oppositions, both at home and away, were included.

The assumptions taken and the basis of the computation are included below
a.The data is based on the following 2 input variables a) Overs bowled b) Runs given. The output variable is ‘Wickets taken’

b.To my surprise I found that in the late 1970's, when B S Chandrasekhar used to bowl, an over had 8 balls for matches in Australia. So I had to normalize this data for Chandra to make it on par with the others. Hence for Chandra, where the overs were made up of 8 balls, the overs were recalculated as follows
Overs (O) = (Overs * 8)/6

c.The Economy rate (E) was calculated as below
E = Runs given/Overs
This was chosen as an input variable to take into account the runs given by the bowler

d.The output variable was re-calculated as Strike Rate (SR) to determine the 'bowling effectiveness'
Strike Rate (SR) = Wickets/Overs
(not to be confused with a batsman's strike rate, which is runs/balls faced)

e.Hence the analysis is based on
f(O,E) = SR
An outline of the Octave code and the data used can be cloned from GitHub at ml-bowling-analyze
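
For concreteness, here is a minimal Octave sketch of the data preparation in (a) to (e) above. The variable names and the 8-ball flag are assumptions for illustration; the actual code and data are in the GitHub repository linked above.

% Illustrative rows standing in for the cleaned Cricinfo data
overs   = [21; 14; 32];            % overs bowled
runs    = [59; 34; 103];           % runs given
wickets = [3; 1; 6];               % wickets taken
eightBall = logical([0; 0; 1]);    % rows bowled with 8-ball overs (Chandra in Australia)

% Normalize 8-ball overs to their 6-ball equivalent: Overs (O) = (Overs * 8)/6
O = overs;
O(eightBall) = O(eightBall) * 8 / 6;

E  = runs ./ O;                    % Economy rate: runs given per over
SR = wickets ./ O;                 % Strike rate used here: wickets per over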

 1. Surface of Bowling Effectiveness (SBE)
In my earlier post I was able to fit a 'prediction plane' based on the minutes at crease and balls faced versus the runs scored. But in this case a plane did not make sense, as the wickets can only range from 0 – 10, averaging between 3 and 5 in most cases. So I plot the best fitting 3-D surface given by the predicted hypothesis function. The steps performed are

1) The data for the different bowlers was cleaned by removing the rows which indicated DNB (did not bowl)
2) The Economy Rate (E) = Runs given/Overs and Strike Rate(SR) = Wickets/overs were calculated.
3) The product of Overs (O), and Economy(E) were stored as Over_Economy(OE)
4) The hypothesis function was computed as h(O, E, OE) = y
5) Theta was calculated using the Normal Equation and the Surface of Bowling Effectiveness (SBE) was then plotted (a minimal sketch of these steps is given below). The plots for each of the bowlers are shown below
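
Here is a minimal Octave sketch of steps 2) to 5), assuming the vectors O, E and SR from the data-preparation sketch earlier; it builds the [1 O E OE] feature matrix, solves for theta with the Normal Equation, computes the cost and plots the fitted surface over the scatter of actual points.

X = [ones(length(O), 1), O, E, O .* E];             % feature matrix [1 O E OE]
theta = pinv(X' * X) * X' * SR;                     % Normal Equation
J = sum((X * theta - SR) .^ 2) / (2 * length(SR));  % cost at this theta

% Evaluate the fitted Surface of Bowling Effectiveness over a grid of O and E
[Og, Eg] = meshgrid(linspace(min(O), max(O), 30), linspace(min(E), max(E), 30));
SRg = theta(1) + theta(2) * Og + theta(3) * Eg + theta(4) * Og .* Eg;
mesh(Og, Eg, SRg); hold on;
plot3(O, E, SR, 'rx');                              % scatter of the actual data points
xlabel('Overs (O)'); ylabel('Economy rate (E)'); zlabel('Strike rate (SR)');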

Here are the plots

A) Anil Kumble
The data for Kumble, based on overs bowled & economy rate versus the strike rate, is plotted as a 3-D scatter plot (pink crosses). The best fit, as determined by solving for the optimum theta using the Normal Equation, is plotted as the 3-D surface shown below.
kumble-1
The 3-D surface is what I have termed the 'Surface of Bowling Effectiveness (SBE)', as it depicts the bowler's overall effectiveness by plotting the overs (O) and 'economy rate' (E) against the predicted 'strike rate' (SR).
Here is another view
kumble-2
The theta values obtained for Kumble are
Theta =
0.104208
-0.043769
-0.016305
0.011949

And the cost at this theta is
Cost Function J = 0.0046269

B) B S Chandrasekhar
Here is the optimal surface plot for Chandra, with the data on O, E vs SR plotted as a 3D scatter plot. Note: The dataset for Chandrasekhar is smaller compared to the other two.
chandra-1
Another view for Chandra
chandra-2

Theta values for B S Chandrasekhar are
Theta =
0.095780
-0.025377
-0.024847
0.023415
and the cost is
Cost Function J = 0.0032980

C) Kapil Dev
The plots  for Kapil
kapil-1
Another view of SBE for Kapil
kapil-2
The Theta values and cost function for Kapil are
Theta =
0.090219
0.027725
0.023894
-0.021434
Cost Function J = 0.0035123

2. Predicting wickets
In the previous section the optimum theta with the lowest Cost Function J was calculated. Based on this value of theta, the strike rate predicted by the hypothesis function is computed as the product of the feature vector and theta, i.e.

Strike Rate (SR) = h(x) = [1 O E OE] * theta
Now predicted wickets can be calculated as

wickets = Strike rate(SR) * Overs(O)
This is done  for Kumble, Chandra and Kapil  for different combinations of Overs(O) and Economy(E) rate.
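
A minimal Octave sketch of this step is shown below, assuming the 4 x 1 theta obtained from the Normal Equation above; the grids of overs and economy rates are arbitrary illustrative ranges, not necessarily the ones used for the published tables.

Ovals = (10:10:60)';                    % overs bowled (illustrative grid)
Evals = (2:0.5:5)';                     % economy rates (illustrative grid)
predWkts = zeros(length(Ovals), length(Evals));
for i = 1:length(Ovals)
  for j = 1:length(Evals)
    x = [1, Ovals(i), Evals(j), Ovals(i) * Evals(j)];
    SRpred = x * theta;                 % predicted strike rate = [1 O E OE] * theta
    predWkts(i, j) = SRpred * Ovals(i); % predicted wickets = SR * Overs
  end
end
disp(round(predWkts * 10) / 10);        % table of predicted wickets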

Here are the results
Predicted wickets for Anil Kumble
The plot of predicted wickets for Kumble is represented below
kumble-wickets-1
This can also be represented as a table
kumble-wkts-tbl

Predicted wickets for B S Chandrasekhar
chandra-wickets-1
The table for Chandra
chandra-wkts-tbl
 Predicted wickets for Kapil Dev

The plot
kapil-wicket-2

The predicted table from the hypothesis function for Kapil Dev
kapil-wkts-tbl

Observation: A closer look at the predicted wickets for Kapil, Kumble and B S Chandra shows an interesting aspect. The predicted number of wickets is higher for lower economy rates. With a little thought we can see that bowlers on turning pitches, or pitches with a lot of movement, can not only be more economical but can also be destructive and take a lot of wickets. Hence the higher wickets for lower economy rates!

Implementation details
In this post I have used the Normal Equation to get the optimal values of theta that minimize the cost function. As mentioned above, when I drew the 3D scatter plot, fitting a 2D plane did not seem quite right. So I had to experiment with different polynomial equations, first trying 2nd order, 3rd order and also the sqrt function.

I tried the following, where 'O' is Overs, 'E' stands for Economy Rate and 'SR' is the predicted Strike Rate. Theta is the theta computed from the Normal Equation. The models, in matrix notation, are shown below

i) A linear plane
SR = [1 O E] * theta

ii) Using the sqrt function
SR = [1 sqrt(O) sqrt(E)]  * theta

iii) Using a 2nd order polynomial
SR = [1 O^2 E^2] * theta

iv) Using the 3rd order polynomial
SR = [1 O^3 E^3] * theta

v) Before finally settling on
SR = [1 O E OE] * theta

where OE  = O .* E

The last one seemed to give me the lowest cost and also seemed the most logical visual choice.
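
This comparison can be sketched in Octave as below: each candidate feature map is fitted with the Normal Equation and its cost J computed, again assuming the O, E and SR vectors from the earlier sketches.

n = length(SR);
candidates = { [ones(n,1) O E], ...               % i)   linear plane
               [ones(n,1) sqrt(O) sqrt(E)], ...   % ii)  sqrt features
               [ones(n,1) O.^2 E.^2], ...         % iii) 2nd order polynomial
               [ones(n,1) O.^3 E.^3], ...         % iv)  3rd order polynomial
               [ones(n,1) O E O.*E] };            % v)   the chosen model
for k = 1:numel(candidates)
  X = candidates{k};
  theta = pinv(X' * X) * X' * SR;                 % Normal Equation for this feature map
  J = sum((X * theta - SR).^2) / (2 * n);         % cost at the fitted theta
  printf('Model %d: cost J = %f\n', k, J);
end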

A good resource to play around with different functions and check out the shapes of combinations of variables and polynomial order of equation is at WolframAlpha: Plotting and Graphics

Note 1: The fit with the Normal Equation has been performed on the entire data set (approx 220 data points for Kumble & Kapil, and 99 for Chandra). The proper process for verifying a Machine Learning algorithm is to split the data set into 60% training data, 20% cross-validation data and 20% test data. We need to validate the prediction function against the cross-validation set, fine-tune it and finally ensure that it fits the test set samples well. However, this split was not done as the data set itself was very small. The entire data set was used to perform the optimal surface fit.
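
For reference, a minimal Octave sketch of such a 60/20/20 split (which, as noted above, was not done here because of the small data set) could look like this, assuming the feature matrix X and target vector SR from the earlier sketches.

n = length(SR);
idx = randperm(n);                         % random shuffle of the row indices
nTrain = floor(0.6 * n);  nCV = floor(0.2 * n);
trainIdx = idx(1 : nTrain);
cvIdx    = idx(nTrain + 1 : nTrain + nCV);
testIdx  = idx(nTrain + nCV + 1 : end);
Xtrain = X(trainIdx, :);  ytrain = SR(trainIdx);   % fit theta on this set
Xcv    = X(cvIdx, :);     ycv    = SR(cvIdx);      % tune the model on this set
Xtest  = X(testIdx, :);   ytest  = SR(testIdx);    % report the final error on this set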

Note 2: The optimal theta values have been chosen with a feature vector of the form
[1 x y x .* y]
The 'Surface of Bowling Effectiveness' has been plotted above. It may appear that there is a 'high bias' in the fit and that an even better fit could be obtained by choosing higher order polynomials like
[1 x y x*y x^2 y^2 (x^2) .* y x .* (y^2)] or
[1 x y x*y x^2 y^2 x^3 y^3] etc.
While we could get a better fit, we could also run into the problem of 'high variance', and without the cross-validation and test sets we would not be able to verify the results. Hence the simpler option [1 x y x*y] was chosen.

The Octave code outline and the data used can be cloned from GitHub at ml-bowling-analyze

 Conclusion:

1) Predicted wickets: The predicted number of wickets is higher at lower economy rates
2) Comparing performances: There are different ways of looking at the results. One possible way is to check, for a particular number of overs and economy rate, who is the most effective. Here is one way. Taking a small slice from each bowler's predicted wickets table for an Economy Rate of 4.0, the predicted wickets are

comp

From the above it does appear that Kapil is definitely more effective than the other two. However, one could slice and dice in different ways, maybe looking for the most economical bowler for a given overs and wickets combination, or the wickets taken in the least overs etc. Do add your thoughts and comments on my assessment or analysis.

Also see
1. Analyzing cricket’s batting legends – Through the mirage with R
2. Masters of spin: Unraveling the web with R

You may also like
1. A peek into literacy in India:Statistical learning with R
2. A crime map of India in R: Crimes against women
3.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1

Informed choices through Machine Learning – Analyzing Kohli, Tendulkar and Dravid

Having just completed the highly stimulating & inspiring Stanford Machine Learning course at Coursera, by the incomparable Professor Andrew Ng, I wanted to give my newly acquired knowledge a try. As a start, I decided to try my hand at analyzing one of India's fastest rising stars, namely Virat Kohli. For the data on Virat Kohli I used the 'Statistics database' at ESPN Cricinfo. To make matters more interesting, I also pulled data on the iconic Sachin Tendulkar and Mr. Dependable, Rahul Dravid.

If you are passionate about cricket, and love analyzing cricket performances, then check out my 2 racy books on cricket! In my books, I perform detailed yet compact analysis of performances of both batsmen, bowlers besides evaluating team & match performances in Tests , ODIs, T20s & IPL. You can buy my books on cricket from Amazon at $12.99 for the paperback and $4.99/$6.99 respectively for the kindle versions. The books can be accessed at Cricket analytics with cricketr  and Beaten by sheer pace-Cricket analytics with yorkr  A must read for any cricket lover! Check it out!!


(Also do check out my R package cricketr  Introducing cricketr! : An R package to analyze performances of cricketers and my interactive Shiny app implementation using my R package cricketr  – Sixer – R package cricketr’s new Shiny avatar )

Based on the data of these batsmen I perform some predictions with the help of machine learning algorithms. That I have a proclivity for prediction, is not surprising, considering the fact that my Dad was an astrologer who had reasonable success at this esoteric art. While he would be concerned with planetary positions, about Rahu in the 7th house being in the malefic etc., I on the other hand focus my  predictions on multivariate regression analysis and K-Means. The first part of my post gives the results of my analysis and some predictions for Kohli, Tendulkar and Dravid.

The second part of the post contains a brief outline of the implementation and not the actual details of the implementation. This is to ensure that I don't violate Coursera's Machine Learning Honor Code.

This code, data used and the output obtained  can be accessed at GitHub at ml-cricket-analysis

Analysis and prediction of Kohli, Tendulkar and Dravid with Machine Learning

As mentioned above, I pulled the data for the 3 cricketers Virat Kohli, Sachin Tendulkar and Rahul Dravid. The data taken from the Cricinfo database for the 3 batsmen is based on the following assumptions

  • Only ‘Minutes at Crease’ and ‘Balls Faced’ were taken as features against the output variable ‘Runs scored’
  • Only test matches were taken. This included both test ‘at home’ and ‘away tests’
  • The data was cleaned to remove any DNB (did not bat) values
  • No extra weightage was given to 'not out'. So if Kohli made 28* (28 not out), this was taken to be 28 runs

Regression Analysis for Virat Kohli

There are 51 data points for Virat Kohli regarding Tests played. The data for Kohli is displayed as a 3D scatter plot where the x-axis is 'minutes', the y-axis is 'balls faced' and the vertical z-axis is the 'runs scored'. Multivariate regression analysis was performed to find the best fitting plane for the runs scored based on the selected features of 'minutes' and 'balls faced'.

This is based on minimizing the cost function and then performing gradient descent for 400 iterations to check for convergence. The resulting 3-D plane provides the best fit for the data points for Kohli. The diagram below shows the prediction plane of expected runs for a combination of 'minutes at crease' and 'balls faced'. Here are 2 such plots for Virat Kohli.
kohli
Another view of the prediction plane
kohli-1

Prediction for Kohli
I have also computed the predicted runs that will be scored by Kohli for different combinations of 'minutes at crease' and 'balls faced'. As an example, from the table below, we can see that the predicted runs for Kohli after being at the crease for 110 minutes and facing 135 balls is 54 runs.
kohli-score

Regression analysis for Sachin Tendulkar
There was a lot more data on Tendulkar and I was able to dump close to 329 data points. As before, 'minutes at crease' and 'balls faced' vs 'runs scored' were plotted as a 3D scatter plot. The prediction plane is calculated using gradient descent and is shown as a plane in the diagram below
srt
Another view of this below
srt-1

Predicted runs for Tendulkar
The table below gives the predicted runs for Tendulkar for a combination of time at crease and balls faced. Hence, Tendulkar will score 57 runs in 110 minutes after facing 135 deliveries
srt-score

Regression Analysis for Rahul Dravid
The same was done for 'the Wall' Dravid. The prediction plane is below
dravid
dravid-1

Predicted runs for Dravid
The predicted runs for Dravid for combinations of batting time and balls faced are included below. The predicted runs for Dravid after facing 135 deliveries in 110 minutes is 44.
dravid-score

Further analysis
While the 'prediction plane' was useful, it somehow does not give a clear picture of how effective each batsman is. Clearly the 3D plots show at least 3 clusters for each batsman. For all batsmen, the clustering is densest near the origin, becomes less dense towards the middle and is sparse at the other end. This is an indication of the session of their innings during which the batsman is most prone to get out. So I decided to perform K-Means clustering on the data for the 3 batsmen. This gives the 3 general tendencies for each batsman. The output is included below

K-Means for Virat Kohli
The K-Means for Virat Kohli indicate the following

Centroids found 255.000000 104.478261 19.900000
Centroids found 194.000000 80.000000 15.650000
Centroids found 103.000000 38.739130 7.000000

Analysis of Virat Kohli’s batting tendency
Kohli has a 45.098 percent tendency to bat for 104 minutes,  face 80 balls and score 38 runs
Kohli has a 39.216 percent tendency to bat for 19 minutes, face 15 balls and score 7 runs
Kohli has a 15.686 percent tendency to bat for 255 minutes, face 194 balls and score 103 runs

The computation of this is included in the diagram below

kohli-kmeans
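
The K-Means step and the 'percent tendency' figures can be sketched in Octave as below. This is a simple hand-rolled Lloyd's algorithm over [minutes balls runs] for illustration only, not the actual code used; it assumes the cleaned column vectors minutes, balls and runs, and that no cluster goes empty.

data = [minutes, balls, runs];
K = 3;  m = rows(data);
perm = randperm(m);
centroids = data(perm(1:K), :);               % random initial centroids
idx = zeros(m, 1);
for iter = 1:100
  for i = 1:m                                 % assign each point to its nearest centroid
    d = sum((centroids - repmat(data(i,:), K, 1)) .^ 2, 2);
    [dmin, idx(i)] = min(d);
  end
  for k = 1:K                                 % recompute each centroid as the cluster mean
    centroids(k, :) = mean(data(idx == k, :), 1);
  end
end
for k = 1:K                                   % percent tendency = share of points in cluster k
  printf('%.3f percent tendency: %.0f mins, %.0f balls, %.0f runs\n', ...
         100 * mean(idx == k), centroids(k, 1), centroids(k, 2), centroids(k, 3));
end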

K-means for Sachin Tendulkar

The K-Means for Sachin Tendulkar indicate the following

Centroids found 166.132530 353.092593 43.748691
Centroids found 121.421687 250.666667 30.486911
Centroids found 65.180723 138.740741 15.748691

Analysis of Sachin Tendulkar’s performance

Tendulkar has a 58.232 percent tendency to bat for 43 minutes, face 30 balls and score 15 runs
Tendulkar has a 25.305 percent tendency to bat for 166 minutes, face 121 balls and score 65 runs
Tendulkar has a 16.463 percent tendency to bat for 353 minutes, face 250 balls and score 138 runs
srt-kmeans

K-Means for Rahul Dravid

Centroids found 191.836364 409.000000 50.506024
Centroids found 137.381818 290.692308 34.493976
Centroids found 56.945455 131.500000 13.445783

Analysis of Rahul Dravid’s performance
Dravid has a 50.610 percent tendency to bat for 50 minutes,  face 34 balls and score 13 runs
Dravid has a 33.537 percent tendency to bat for 191 minutes,  face 137 balls and score 56 runs
Dravid has a 15.854 percent tendency to bat for 409 minutes, face 290 balls and score 131 runs
dravid-kmeans

Some implementation details
The entire analysis and coding was done with Octave 3.2.4. I have included the outline of the code for performing the multivariate regression. In essence, the pseudocode for this is

  1. Read the batsman data (Minutes, balls faced versus Runs scored)
  2. Calculate the cost
  3. Perform Gradient descent

The cost was plotted against the number of iterations to ensure convergence while performing gradient descent.
convergence-kohli
The 3-D plane that best fits the data is then plotted.
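
A minimal Octave sketch of this pseudocode is given below, assuming minutes, balls and runs are column vectors from the cleaned batting data. The 400 iterations follow the description above; the learning rate and the feature scaling are illustrative assumptions, and the actual code is in the GitHub repository linked below.

m = length(runs);
X = [minutes, balls];
mu = mean(X);  sigma = std(X);
Xn = [ones(m, 1), (X - repmat(mu, m, 1)) ./ repmat(sigma, m, 1)];  % scaled features + bias
y = runs;
theta = zeros(3, 1);  alpha = 0.01;  iters = 400;
Jhist = zeros(iters, 1);
for k = 1:iters
  grad  = (Xn' * (Xn * theta - y)) / m;            % gradient of the squared-error cost
  theta = theta - alpha * grad;                    % gradient descent update
  Jhist(k) = sum((Xn * theta - y) .^ 2) / (2 * m); % cost, to check convergence
end
plot(1:iters, Jhist); xlabel('Iteration'); ylabel('Cost J');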
The outline of this code, data used and the output obtained  can be accessed at GitHub at ml-cricket-analysis

Conclusion: Comparing the results from the K-Means, Tendulkar has around a 42% tendency to make a score greater than 60
Tendulkar has a 25.305 percent tendency to bat for 166 minutes, face 121 balls and score 65 runs
Tendulkar has a 16.463 percent tendency to bat for 353 minutes, face 250 balls and score 138 runs

And Dravid has around a 49% tendency to score greater than 56 runs
Dravid has a 33.537 percent tendency to bat for 191 minutes,  face 137 balls and score 56 runs
Dravid has a 15.854 percent tendency to bat for 409 minutes, face 290 balls and score 131 runs

Kohli has around a 45% tendency to score 38 runs or more
Kohli has a 45.098 percent tendency to bat for 104 minutes,  face 80 balls and score 38 runs

Also, Kohli has a smaller tendency than the other two to make a low score
Kohli has a 39.216 percent tendency to bat for 19 minutes, face 15 balls and score 7 runs

The results must be looked at in proper perspective, as Kohli is just starting his career while the other 2 are veterans. Kohli has a long way to go and I am certain that he will blaze a trail of glory in the years to come!

Watch this space!

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2.Introducing cricketr! : An R package to analyze performances of cricketers
3.Informed choices with Machine Learning 2 – Pitting together Kumble, Kapil and Chandra
4. Analyzing cricket’s batting legends – Through the mirage with R
5. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
6. Bend it like Bluemix, MongoDB with autoscaling – Part 1