Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8

“Lights, camera and … action – Take 4+!”

This post includes a rework of all the presentations in ‘Elements of Neural Networks and Deep Learning – Parts 1-8’, since my earlier presentations had some missing parts, omissions and occasional errors. So I have re-recorded all the presentations.
This series of presentations does a deep dive into Deep Learning networks, starting from the fundamentals. The equations required for performing learning in an L-layer Deep Learning network are derived in detail, starting from the basics. Further, the presentations also discuss multi-class classification, regularization techniques, and gradient descent optimization methods in deep networks. Finally, the presentations touch on how Deep Learning networks can be tuned.

The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning. It takes a look at the history of Neural Networks and Perceptrons, explains why Deep Learning networks are required, and concludes with a simple toy example of a Neural Network and how it computes. This part also includes a small digression on the basics of Machine Learning and how an algorithm learns from a data set.

2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2-layer Neural Network. The presentation also takes a look at forward and backward propagation and how the cost is minimized using gradient descent.


The implementation of the discussed 2-layer Neural Network in vectorized R, Python and Octave is available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1’.

3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part discusses a primitive neural network with an input layer, output layer and a hidden layer. The neural network uses tanh activation in the hidden layer and sigmoid activation in the output layer. The equations for forward and backward propagation are derived.


To see the implementations for the above video, see my post ‘Deep Learning from first principles in Python, R and Octave – Part 2’.

4. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation, in which I derived the equations for a simple 3-layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for an L-Layer, multi-unit Deep Learning Network, with any activation function g(z).


The implementations of the L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

5. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation of the Jacobian of the Softmax is discussed, and subsequently the derivative of the cross-entropy loss is also discussed in detail. Finally, the complete set of equations for a Neural Network with multi-class classification is derived.


The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

6. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed.


The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

7. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces the exponentially weighted moving average and shows how it is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam.

The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

8. Elements of Neural Networks and Deep Learning – Part 8
This last part touches on the method to adopt while tuning hyper-parameters in Deep Learning networks

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’.

Also
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Introducing cricpy:A python package to analyze performances of cricketers
3. Natural language processing: What would Shakespeare say?
4. Big Data-2: Move into the big league:Graduate from R to SparkR
5. Presentation on Wireless Technologies – Part 1
6. Introducing cricketr! : An R package to analyze performances of cricketers

To see all posts click Index of posts

My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 6,7,8

This is the final set of presentations in my series ‘Elements of Neural Networks and Deep Learning’. This set follows the earlier 2 sets of presentations, namely
1. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Part1,2,3
2. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 4,5

In this final set of presentations I discuss initialization methods and regularization techniques, including dropout. Next, I discuss gradient descent optimization methods like momentum, RMSProp, Adam etc. Lastly, I briefly touch on hyper-parameter tuning approaches. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

1. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed.


The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

2. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces the exponentially weighted moving average and shows how it is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam.


The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

3. Elements of Neural Networks and Deep Learning – Part 8
This last part touches upon hyper-parameter tuning in Deep Learning networks


This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’.

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8. These have more content and also include some corrections. Check it out!

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version ($9.99/Rs449).

See also
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Big Data-1: Move into the big league:Graduate from Python to Pyspark
3. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
4. Revisiting crimes against women in India
5. Introducing cricket package yorkr: Part 1- Beaten by sheer pace!
6. Deblurring with OpenCV: Weiner filter reloaded
7. Taking a closer look at Quantum gates and their operations

To see all posts click Index of posts

My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 4,5

This is the next set of presentations on ‘Elements of Neural Networks and Deep Learning’. In the 4th presentation I discuss and derive the generalized equations for a multi-unit, multi-layer Deep Learning network. The 5th presentation derives the equations for a Deep Learning network when performing multi-class classification, along with the derivations for the cross-entropy loss. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8. These have more content and also include some corrections. Check it out!

1. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation, in which I derived the equations for a simple 3-layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for an L-Layer, multi-unit Deep Learning Network, with any activation function g(z).


The implementations of the L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

2. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation of the Jacobian of the Softmax is discussed, and subsequently the derivative of the cross-entropy loss is also discussed in detail. Finally, the complete set of equations for a Neural Network with multi-class classification is derived.


The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

To be continued. Watch this space!

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

Also see
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Big Data-2: Move into the big league:Graduate from R to SparkR
3. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
4. My TEDx talk on the “Internet of Things
5. Rock N’ Roll with Bluemix, Cloudant & NodeExpress
6. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
7. Literacy in India – A deepR dive
8. Fun simulation of a Chain in Android

To see all posts click Index of Posts

My presentations on ‘Elements of Neural Networks & Deep Learning’ -Part1,2,3

I will be uploading a series of presentations on ‘Elements of Neural Networks and Deep Learning’. In these video presentations I discuss the derivations of L-layer Deep Learning Networks, starting from the basics. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning. It takes a look at the history of Neural Networks and Perceptrons, explains why Deep Learning networks are required, and concludes with a simple toy example of a Neural Network and how it computes.

2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2-layer Neural Network. The presentation also takes a look at forward and backward propagation and how the cost is minimized using gradient descent.


The implementation of the discussed 2-layer Neural Network in vectorized R, Python and Octave is available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1’.

3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part discusses a primitive neural network with an input layer, output layer and a hidden layer. The neural network uses tanh activation in the hidden layer and sigmoid activation in the output layer. The equations for forward and backward propagation are derived.


To see the implementations for the above video, see my post ‘Deep Learning from first principles in Python, R and Octave – Part 2’.

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8. These have more content and also include some corrections. Check it out!

To be continued. Watch this space!

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Introducing cricpy:A python package to analyze performances of cricketers
3. Natural language processing: What would Shakespeare say?
4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
5. Getting started with memcached-libmemcached
6. Simplifying ML: Impact of degree of polynomial degree on bias & variance and other insights

To see all posts click Index of posts

My book ‘Deep Learning from first principles:Second Edition’ now on Amazon

The second edition of my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’ is now available on Amazon, in both paperback ($18.99) and kindle ($9.99/Rs449/-) versions. Since this book is almost 70% code, all functions and code snippets have been formatted to use the fixed-width font ‘Lucida Console’. In addition, line numbers have been added to all code snippets. This makes the code more organized and much more readable. I have also fixed typos in the book.


The book includes the following chapters

Table of Contents
Preface 4
Introduction 6
1. Logistic Regression as a Neural Network 8
2. Implementing a simple Neural Network 23
3. Building a L- Layer Deep Learning Network 48
4. Deep Learning network with the Softmax 85
5. MNIST classification with Softmax 103
6. Initialization, regularization in Deep Learning 121
7. Gradient Descent Optimization techniques 167
8. Gradient Check in Deep Learning 197
1. Appendix A 214
2. Appendix 1 – Logistic Regression as a Neural Network 220
3. Appendix 2 - Implementing a simple Neural Network 227
4. Appendix 3 - Building a L- Layer Deep Learning Network 240
5. Appendix 4 - Deep Learning network with the Softmax 259
6. Appendix 5 - MNIST classification with Softmax 269
7. Appendix 6 - Initialization, regularization in Deep Learning 302
8. Appendix 7 - Gradient Descent Optimization techniques 344
9. Appendix 8 – Gradient Check 405
References 475

Also see
1. My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon
2. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
3. De-blurring revisited with Wiener filter using OpenCV
4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
5. A Cloud medley with IBM Bluemix, Cloudant DB and Node.js
6. Practical Machine Learning with R and Python – Part 6
7. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
8. Fun simulation of a Chain in Android

To see posts click Index of Posts

My book “Deep Learning from first principles” now on Amazon

Note: The 2nd edition of this book is now available on Amazon

My 4th book (self-published), “Deep Learning from first principles – In vectorized Python, R and Octave” (557 pages), is now available on Amazon in both paperback ($18.99) and kindle ($9.99/Rs449) versions. The book starts with the most primitive 2-layer Neural Network and works its way to a generic L-layer Deep Learning Network, with all the bells and whistles. The book includes detailed derivations and vectorized implementations in Python, R and Octave. The code has been extensively commented and has been included in the Appendix section.

Pick up your copy today!!!

My other books
1. Practical Machine Learning with R and Python
2. Beaten by sheer pace – Cricket analytics with yorkr
3. Cricket analytics with cricketr

Deep Learning from first principles in Python, R and Octave – Part 8

1. Introduction

You don’t understand anything until you learn it more than one way. Marvin Minsky
No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either. Marvin Minsky
A wealth of information creates a poverty of attention. Herbert Simon

This post, Deep Learning from first principles in Python, R and Octave – Part 8, is the final post in my Deep Learning from first principles series. In this post, I discuss and implement a key functionality needed while building Deep Learning networks, viz. ‘Gradient Checking’. Gradient Checking is an important method to check the correctness of your implementation, specifically the forward propagation and backward propagation cycles of an implementation. In addition, I also discuss some tips for tuning the hyper-parameters of a Deep Learning network, based on my experience.

My posts in this ‘Deep Learning Series’ so far were
1. Deep Learning from first principles in Python, R and Octave – Part 1 In part 1, I implement logistic regression as a neural network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2 In the second part I implement a simple Neural network with just 1 hidden layer and a sigmoid activation output function
3. Deep Learning from first principles in Python, R and Octave – Part 3 The 3rd part implemented a multi-layer Deep Learning Network with sigmoid activation output in vectorized Python, R and Octave
4. Deep Learning from first principles in Python, R and Octave – Part 4 The 4th part deals with multi-class classification. Specifically, I derive the Jacobian of the Softmax function and enhance my L-Layer DL network to include Softmax output function in addition to Sigmoid activation
5. Deep Learning from first principles in Python, R and Octave – Part 5 This post uses the Softmax classifier implemented to classify MNIST digits using a L-layer Deep Learning network
6. Deep Learning from first principles in Python, R and Octave – Part 6 The 6th part adds more bells and whistles to my L-Layer DL network, by including different initialization types, namely He and Xavier. Besides this, L2 regularization and random dropout are added.
7. Deep Learning from first principles in Python, R and Octave – Part 7 The 7th part deals with Stochastic Gradient Descent Optimization methods including momentum, RMSProp and Adam
8. Deep Learning from first principles in Python, R and Octave – Part 8 – This post implements a critical function for ensuring the correctness of an L-Layer Deep Learning network implementation, using Gradient Checking

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python- Machine Learning in stereo” available in Amazon in paperback($9.99) and Kindle($6.99) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.

Gradient Checking is based on the following approach. One iteration of Gradient Descent computes and updates the parameters \theta by doing
\theta := \theta - \alpha \frac{d}{d\theta}J(\theta), where \alpha is the learning rate.
To minimize the cost we will need to minimize J(\theta).
Let g(\theta) be a function that computes the derivative \frac {d}{d\theta}J(\theta). Gradient Checking allows us to numerically evaluate the implementation of the function g(\theta) and verify its correctness.
We know the derivative of a function is given by
\frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
Note: The above is the 2-sided derivative. The 1-sided derivative is given by \frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta)} {\epsilon}
Gradient Checking is based on the 2-sided derivative because its error is of the order O(\epsilon^{2}), as opposed to O(\epsilon) for the 1-sided derivative.
Hence Gradient Check uses the 2-sided derivative as follows:
g(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
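
To get a feel for the difference in accuracy between the two forms, here is a short illustrative Python snippet (my own toy example, not part of the original implementation) that compares the 2-sided and 1-sided approximations for a toy cost J(\theta) = \theta^{3} at \theta = 2, where the true derivative is 3\theta^{2} = 12.

import numpy as np

# Toy cost function and its analytical derivative
J = lambda theta: theta**3
dJ = lambda theta: 3*theta**2

theta, epsilon = 2.0, 1e-3
true_grad = dJ(theta)                                                # 12.0
# 2-sided (central) difference: error is of order epsilon^2
grad_2sided = (J(theta + epsilon) - J(theta - epsilon))/(2*epsilon)
# 1-sided (forward) difference: error is of order epsilon
grad_1sided = (J(theta + epsilon) - J(theta))/epsilon

print("2-sided error:", abs(grad_2sided - true_grad))   # ~1e-06
print("1-sided error:", abs(grad_1sided - true_grad))   # ~6e-03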

In Gradient Check the following is done
A) Run one normal cycle of your implementation by doing the following
a) Compute the output activation by running 1 cycle of forward propagation
b) Compute the cost using the output activation
c) Compute the gradients using backpropagation (grad)

B) Perform gradient check steps as below
a) Set \theta: flatten all the ‘weight’ and ‘bias’ matrices and vectors into a single column vector.
b) Initialize \theta+ by bumping up \theta by adding \epsilon (\theta + \epsilon)
c) Perform forward propagation with \theta+
d) Compute cost with \theta+ i.e. J(\theta+)
e) Initialize  \theta- by bumping down \theta by subtracting \epsilon (\theta - \epsilon)
f) Perform forward propagation with \theta-
g) Compute cost with \theta- i.e.  J(\theta-)
h) Compute \frac {d} {d\theta} J(\theta) or ‘gradapprox’ as \frac {J(\theta+) - J(\theta-) } {2\epsilon} using the 2-sided derivative.
i) Compute the relative difference between ‘grad’ and ‘gradapprox’ using the L2 norm (Euclidean distance), i.e. \frac {\|grad - gradapprox\|_{2}} {\|grad\|_{2} + \|gradapprox\|_{2}}. If the difference is of the order of 10^{-5} or 10^{-7} the implementation is correct. In the Deep Learning Specialization Prof Andrew Ng mentions that if the difference is of the order of 10^{-7} then the implementation is correct. A difference of 10^{-5} is also ok. Anything more than that is a cause for worry and you should look at your code more closely. To see more details click Gradient checking and advanced optimization

You can clone/download the code from Github at DeepLearning-Part8

After spending the better part of 3 days, I now realize how critical Gradient Check is for ensuring the correctness of your implementation. Initially I was getting a very high difference and did not know how to interpret the results or debug my implementation. After many hours of staring at the results, I was finally able to arrive at a way to localize issues in the implementation. In fact, I did catch a small bug in my Python code, which did not exist in the R and Octave implementations. I will demonstrate this below.

1.1a Gradient Check – Sigmoid Activation – Python

import numpy as np
import matplotlib

exec(open("DLfunctions8.py").read())
exec(open("testcases.py").read())
#Load the data
train_X, train_Y, test_X, test_Y = load_dataset()
#Set layer dimensions
layersDimensions = [2,4,1]  
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, hiddenActivationFunc="relu",outputActivationFunc="sigmoid")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="sigmoid") 
print("cost=",cost)
#Perform backprop and get gradients
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1,                                   hiddenActivationFunc="relu",outputActivationFunc="sigmoid")

epsilon = 1e-7
outputActivationFunc="sigmoid"

# Set-up variables
# Flatten parameters to a vector
parameters_values, _ = dictionary_to_vector(parameters)
#Flatten gradients to a vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = parameters_values.shape[0]
#Initialize
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))

# Compute gradapprox using 2 sided derivative
for i in range(num_parameters):
    # Compute J_plus[i]. 
    thetaplus = np.copy(parameters_values)                                   
    thetaplus[i][0] = thetaplus[i][0] + epsilon                                 
    AL, caches, dropoutMat = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaplus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    J_plus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc) 
    
    
    # Compute J_minus[i]. 
    thetaminus = np.copy(parameters_values)                                     
    thetaminus[i][0] = thetaminus[i][0] - epsilon                                     
    AL, caches, dropoutMat  = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaminus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    J_minus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc)                            
       
    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)

# Compare gradapprox to backward propagation gradients by computing difference. 
numerator = np.linalg.norm(grad-gradapprox)                                           
denominator = np.linalg.norm(grad) +  np.linalg.norm(gradapprox)                                         
difference =  numerator/denominator                                        

#Check the difference
if difference > 1e-5:
    print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
    print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
print(difference)
print("\n")    
# The technique below can be used to identify 
# which of the parameters are in error
# Convert grad to dictionary
m=vector_to_dictionary2(parameters,grad)
print("Gradients from backprop")
print(m)
print("\n")
# Convert gradapprox to dictionary
n=vector_to_dictionary2(parameters,gradapprox)
print("Gradapprox from gradient check")
print(n)
## (300, 2)
## (300,)
## cost= 0.6931455556341791
## Your backward propagation works perfectly fine! difference = 1.1604150683743381e-06
## 1.1604150683743381e-06
## 
## 
## Gradients from backprop
## {'dW1': array([[-6.19439955e-06, -2.06438046e-06],
##        [-1.50165447e-05,  7.50401672e-05],
##        [ 1.33435433e-04,  1.74112143e-04],
##        [-3.40909024e-05, -1.38363681e-04]]), 'db1': array([[ 7.31333221e-07],
##        [ 7.98425950e-06],
##        [ 8.15002817e-08],
##        [-5.69821155e-08]]), 'dW2': array([[2.73416304e-04, 2.96061451e-04, 7.51837363e-05, 1.01257729e-04]]), 'db2': array([[-7.22232235e-06]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[-6.19448937e-06, -2.06501483e-06],
##        [-1.50168766e-05,  7.50399742e-05],
##        [ 1.33435485e-04,  1.74112391e-04],
##        [-3.40910633e-05, -1.38363765e-04]]), 'db1': array([[ 7.31081862e-07],
##        [ 7.98472399e-06],
##        [ 8.16013923e-08],
##        [-5.71764858e-08]]), 'dW2': array([[2.73416290e-04, 2.96061509e-04, 7.51831930e-05, 1.01257891e-04]]), 'db2': array([[-7.22255589e-06]])}

1.1b Gradient Check – Softmax Activation – Python (Error!!)

In the code below I show how I managed to spot a bug in my implementation.

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1

parameters = initializeDeepModel(layersDimensions)
#Compute forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
# Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")

cost= 1.0986187818144022
2
There is a mistake in the backward propagation! difference = 0.7100295155692544
0.7100295155692544


Gradients from backprop
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40953794e-05, -9.52657769e-04, -1.10269379e-04],
       [-7.45469382e-04,  9.49795606e-04,  2.29045434e-04],
       [ 8.29564761e-04,  2.86216305e-06, -1.18776055e-04]]), 
     'db2': array([[-0.00253808],
       [-0.00505508],
       [ 0.00759315]])}


Gradapprox from gradient check
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40960634e-05, -9.52657953e-04, -1.10268461e-04],
       [-7.45469242e-04,  9.49796908e-04,  2.29045671e-04],
       [ 8.29565305e-04,  2.86104473e-06, -1.18776100e-04]]), 
     'db2': array([[-8.46211989e-06],
       [-1.68487446e-05],
       [ 2.53108645e-05]])}

Gradient Check gives a high value for the difference of 0.7100295. Inspecting the gradients and gradapprox, we can see there is a very big discrepancy in db2. After I went over my code I discovered that my computation in the function layerActivationBackward for Softmax was

 
   # Erroneous code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)
instead of
   # Fixed code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = 1/numtraining *  np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)

After fixing this error, when I ran Gradient Check I got the result below.

1.1c Gradient Check – Softmax Activation – Python (Corrected!!)

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1
#Set layer dimensions
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
#Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")
## cost= 1.0986193170234435
## 2
## Your backward propagation works perfectly fine! difference = 5.268804859613151e-07
## 5.268804859613151e-07
## 
## 
## Gradients from backprop
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83441270e-05, -9.70179498e-04, -1.08715815e-04],
##        [-7.70175008e-04,  9.54478237e-04,  2.27690198e-04],
##        [ 8.48519135e-04,  1.57012608e-05, -1.18974383e-04]]), 'db2': array([[-8.52190476e-06],
##        [-1.69954294e-05],
##        [ 2.55173342e-05]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83439980e-05, -9.70180603e-04, -1.08716369e-04],
##        [-7.70173925e-04,  9.54478718e-04,  2.27690089e-04],
##        [ 8.48520143e-04,  1.57018842e-05, -1.18973720e-04]]), 'db2': array([[-8.52096171e-06],
##        [-1.69964043e-05],
##        [ 2.55162558e-05]])}

1.2a Gradient Check – Sigmoid Activation – R

source("DLfunctions8.R")
z <- as.matrix(read.csv("circles.csv",header=FALSE)) 

x <- z[,1:2]
y <- z[,3]
X <- t(x)
Y <- t(y)
#Set layer dimensions
layersDimensions = c(2,5,1)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="sigmoid")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="sigmoid",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 0.6931447
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="sigmoid",numClasses=layersDimensions[length(layersDimensions)])
epsilon = 1e-07
outputActivationFunc="sigmoid"
#Convert parameter list to vector
parameters_values = list_to_vector(parameters)
#Convert gradient list to vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = dim(parameters_values)[1]
#Initialize
J_plus = matrix(rep(0,num_parameters),
                nrow=num_parameters,ncol=1)
J_minus = matrix(rep(0,num_parameters),
                 nrow=num_parameters,ncol=1)
gradapprox = matrix(rep(0,num_parameters),
                    nrow=num_parameters,ncol=1)

# Compute gradapprox
for(i in 1:num_parameters){
    # Compute J_plus[i]. 
    thetaplus = parameters_values                                   
    thetaplus[i][1] = thetaplus[i][1] + epsilon                                 
    retvals = forwardPropagationDeep(X, vector_to_list(parameters,thetaplus), keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    
    AL <- retvals[['AL']]
    J_plus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc) 


   # Compute J_minus[i]. 
    thetaminus = parameters_values                                     
    thetaminus[i][1] = thetaminus[i][1] - epsilon                                     
    retvals  = forwardPropagationDeep(X, vector_to_list(parameters,thetaminus), keep_prob=1, 
                                                 hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    AL <- retvals[['AL']]
    J_minus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc)                            

    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)
}
# Compare gradapprox to backward propagation gradients by computing difference.
#Compute L2Norm
numerator = L2NormVec(grad-gradapprox)                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox)                                         
difference =  numerator/denominator 
if(difference > 1e-5){
    cat("There is a mistake, the difference is too high",difference)
} else{
    cat("The implementations works perfectly", difference)
}
## The implementations works perfectly 1.279911e-06
# This can be used to check which of the parameters are in error
print("Gradients from backprop")
## [1] "Gradients from backprop"
vector_to_list2(parameters,grad)
## $dW1
##               [,1]          [,2]
## [1,] -7.641588e-05 -3.427989e-07
## [2,] -9.049683e-06  6.906304e-05
## [3,]  3.401039e-06 -1.503914e-04
## [4,]  1.535226e-04 -1.686402e-04
## [5,] -6.029292e-05 -2.715648e-04
## 
## $db1
##               [,1]
## [1,]  6.930318e-06
## [2,] -3.283117e-05
## [3,]  1.310647e-05
## [4,] -3.454308e-05
## [5,] -2.331729e-08
## 
## $dW2
##              [,1]         [,2]         [,3]        [,4]         [,5]
## [1,] 0.0001612356 0.0001113475 0.0002435824 0.000362149 2.874116e-05
## 
## $db2
##              [,1]
## [1,] -1.16364e-05
print("Grad approx from gradient check")
## [1] "Grad approx from gradient check"
vector_to_list2(parameters,gradapprox)
## $dW1
##               [,1]          [,2]
## [1,] -7.641554e-05 -3.430589e-07
## [2,] -9.049428e-06  6.906253e-05
## [3,]  3.401168e-06 -1.503919e-04
## [4,]  1.535228e-04 -1.686401e-04
## [5,] -6.029288e-05 -2.715650e-04
## 
## $db1
##               [,1]
## [1,]  6.930012e-06
## [2,] -3.283096e-05
## [3,]  1.310618e-05
## [4,] -3.454237e-05
## [5,] -2.275957e-08
## 
## $dW2
##              [,1]         [,2]         [,3]         [,4]        [,5]
## [1,] 0.0001612355 0.0001113476 0.0002435829 0.0003621486 2.87409e-05
## 
## $db2
##              [,1]
## [1,] -1.16368e-05

1.2b Gradient Check – Softmax Activation – R

source("DLfunctions8.R")
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X <- t(X)
Y <- t(y)
layersDimensions = c(2, 3, 3)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="softmax")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="softmax",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 1.098618
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="softmax",numClasses=layersDimensions[length(layersDimensions)])
# Need to take transpose of the last layer for Softmax
L=length(parameters)/2
gradients[[paste('dW',L,sep="")]]=t(gradients[[paste('dW',L,sep="")]])
gradients[[paste('db',L,sep="")]]=t(gradients[[paste('db',L,sep="")]])
#Perform gradient check
gradient_check_n(parameters, gradients, X, Y, 
                 epsilon = 1e-7,outputActivationFunc="softmax")
## The implementations works perfectly 3.903011e-07[1] "Gradients from backprop"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962367 -0.0001907606
## [2,] 0.0004444254  0.0010354412
## [3,] 0.0003078611  0.0007591255
## 
## $db1
##               [,1]
## [1,] -0.0017305136
## [2,]  0.0005393734
## [3,]  0.0012484550
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515627e-04  7.487283e-04 -3.971656e-04
## [2,] -6.381521e-05 -1.257328e-06  6.507254e-05
## [3,] -1.719479e-04 -4.857264e-04  6.576743e-04
## 
## $db2
##               [,1]
## [1,] -5.536383e-06
## [2,] -1.824656e-05
## [3,]  2.378295e-05
## 
## [1] "Grad approx from gradient check"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962364 -0.0001907607
## [2,] 0.0004444256  0.0010354406
## [3,] 0.0003078615  0.0007591250
## 
## $db1
##               [,1]
## [1,] -0.0017305135
## [2,]  0.0005393741
## [3,]  0.0012484547
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515632e-04  7.487277e-04 -3.971656e-04
## [2,] -6.381451e-05 -1.257883e-06  6.507239e-05
## [3,] -1.719469e-04 -4.857270e-04  6.576739e-04
## 
## $db2
##               [,1]
## [1,] -5.536682e-06
## [2,] -1.824652e-05
## [3,]  2.378209e-05

1.3a Gradient Check – Sigmoid Activation – Octave

source("DL8functions.m")
################## Circles
data=csvread("circles.csv");

X=data(:,1:2);
Y=data(:,3);
#Set layer dimensions
layersDimensions = [2 5  1]; #tanh=-0.5(ok), #relu=0.1 best!
[weights biases] = initializeDeepModel(layersDimensions);
#Perform forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid");
#Compute cost
cost = computeCost(AL, Y',outputActivationFunc="sigmoid",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
#Compute gradients from cost
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));
epsilon = 1e-07;
outputActivationFunc="sigmoid";
# Convert parameter cell array to vector
parameters_values = cellArray_to_vector(weights, biases);
#Convert gradient cell array to vector
grad = gradients_to_vector(gradsDW,gradsDB);
num_parameters = size(parameters_values)(1);
#Initialize
J_plus = zeros(num_parameters, 1);
J_minus = zeros(num_parameters, 1);
gradapprox = zeros(num_parameters, 1);
# Compute gradapprox
for i = 1:num_parameters
    # Compute J_plus[i]. 
    thetaplus = parameters_values;                                   
    thetaplus(i,1) = thetaplus(i,1) + epsilon;      
    [weights1 biases1] =vector_to_cellArray(weights, biases,thetaplus);    
    [AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);
    J_plus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc); 
        
    # Compute J_minus[i]. 
    thetaminus = parameters_values;                                  
    thetaminus(i,1) = thetaminus(i,1) - epsilon ;    
    [weights1 biases1] = vector_to_cellArray(weights, biases,thetaminus);    
    [AL forward_caches activation_caches droputMat]  = forwardPropagationDeep(X',weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);     
    J_minus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc);                            
      
    # Compute gradapprox[i]   
    gradapprox(i) = (J_plus(i) - J_minus(i))/(2*epsilon);

endfor

#Compute L2Norm
numerator = L2NormVec(grad-gradapprox);                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox);                                         
difference =  numerator/denominator;
disp(difference);
#Check difference
if difference > 1e-04
   printf("There is a mistake in the implementation ");
   disp(difference);
else
   printf("The implementation works perfectly");
      disp(difference);
endif
[weights1 biases1] = vector_to_cellArray(weights, biases,grad);  
printf("Gradients from back propagation"); 
disp(weights1);
disp(biases1); 
[weights2 biases2] = vector_to_cellArray(weights, biases,gradapprox); 
printf("Gradients from gradient check");
disp(weights2);
disp(biases2); 
0.69315
1.4893e-005
The implementation works perfectly 1.4893e-005
Gradients from back propagation
{
[1,1] =
5.0349e-005 2.1323e-005
8.8632e-007 1.8231e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9529e-007
5.4502e-005 3.2721e-005
[1,2] =
1.0567e-005 6.0615e-005 4.6004e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8716e-005
1.1309e-009
4.7686e-005
1.2051e-005
-1.4612e-005
[1,2] = 9.5808e-006
}
Gradients from gradient check
{
[1,1] =
5.0348e-005 2.1320e-005
8.8485e-007 1.8219e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9762e-007
5.4502e-005 3.2723e-005
[1,2] =
1.0565e-005 6.0614e-005 4.6007e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8713e-005
1.1102e-009
4.7687e-005
1.2048e-005
-1.4609e-005
[1,2] = 9.5790e-006
}

1.3b Gradient Check – Softmax Activation – Octave

source("DL8functions.m")
data=csvread("spiral.csv");

# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the layer dimensions
layersDimensions = [2 3  3]; 
[weights biases] = initializeDeepModel(layersDimensions);
# Run forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="softmax");
# Compute cost
cost = computeCost(AL, Y',outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
# Perform backward prop
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="softmax",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));

#Take transpose of last layer for Softmax                                
L=size(weights)(2);
gradsDW{L}= gradsDW{L}';
gradsDB{L}= gradsDB{L}';   
#Perform gradient check
difference= gradient_check_n(weights, biases, gradsDW,gradsDB, X, Y, epsilon = 1e-7,
                  outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
 1.0986
The implementation works perfectly  2.0021e-005
Gradients from back propagation
{
  [1,1] =
    -7.1590e-005  4.1375e-005
    -1.9494e-004  -5.2014e-005
    -1.4554e-004  5.1699e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5662e-005
    -4.9692e-004  -3.7756e-004  -8.2318e-005
    1.6562e-004  1.7950e-004  9.7980e-005
}
{
  [1,1] =
    -3.0856e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.2046e-006
    2.9259e-007
    -1.4972e-006
}
Gradients from gradient check
{
  [1,1] =
    -7.1586e-005  4.1377e-005
    -1.9494e-004  -5.2013e-005
    -1.4554e-004  5.1695e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5664e-005
    -4.9692e-004  -3.7756e-004  -8.2316e-005
    1.6562e-004  1.7950e-004  9.7979e-005
}
{
  [1,1] =
    -3.0852e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.1902e-006
    2.8200e-007
    -1.4644e-006
}

2.1 Tips for tuning hyper-parameters

Deep Learning networks come with a large number of hyper-parameters which require tuning. The hyper-parameters are

1. \alpha -learning rate
2. Number of layers
3. Number of hidden units
4. Number of iterations
5. Momentum – \beta – 0.9
6. RMSProp – \beta_{1} – 0.9
7. Adam – \beta_{1},\beta_{2} and \epsilon
8. learning rate decay
9. mini batch size
10. Initialization method – He, Xavier
11. Regularization

– Among the above, the most critical is the learning rate \alpha. Rather than just trying out random values, it helps to try out values on a logarithmic scale, for example 0.01, 0.1, 1.0, 10 etc. If we find that the best value lies between 0.01 and 0.1, we could use a technique similar to binary search or bisection: try the mid-point (e.g. 0.05) and then keep halving the interval towards whichever side performs better (a minimal sketch of such a log-scale search follows this list).
– The performance of Momentum and RMSProp is very good and they work well with a value of 0.9. Even so, it is better to try out values of 1-\beta in the logarithmic range. So 1-\beta could be 0.001, 0.01 or 0.1 and hence \beta would be 0.999, 0.99 or 0.9.
– Increasing the number of hidden units or the number of hidden layers needs to be done gradually. I have noticed that increasing the number of hidden layers heavily does not improve performance and sometimes degrades it.
– Sometimes, I tend to increase the number of iterations if I think I see a steady decrease in the cost for a certain learning rate.
– It may also help to add learning rate decay if you see the cost oscillating while it decreases.
– Xavier and He initializations also help in faster convergence and are worth trying out.
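
The following minimal Python sketch illustrates the log-scale search for the learning rate described above. It is only an outline: train_and_evaluate() is a hypothetical stand-in for training the L-layer network with a given learning rate (for example via L_Layer_DeepModel_SGD) and returning the final cost.

import numpy as np

def sample_learning_rates(n_trials=10, low_exp=-4, high_exp=0):
    # Sample learning rates uniformly on a log scale between 10^low_exp and 10^high_exp
    exponents = np.random.uniform(low_exp, high_exp, n_trials)
    return 10**exponents

candidates = sample_learning_rates(n_trials=5)
print("Candidate learning rates:", np.sort(candidates))
# Hypothetical usage: pick the candidate with the lowest final cost
# best_lr = min(candidates, key=lambda lr: train_and_evaluate(learningRate=lr))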

3.1 Final thoughts

As I come to a close in this Deep Learning Series from first principles in Python, R and Octave, I must admit that I learnt a lot in the process.

* Building a L-layer, vectorized Deep Learning Network in Python, R and Octave was extremely challenging but very rewarding
* One benefit of building vectorized versions in Python, R and Octave was that I was looking at each function that I was implementing thrice, and hence I was able to fix any bugs in any of the languages
* In addition, since I built the generic L-Layer DL network with all the bells and whistles layer by layer, I had a further opportunity to revisit all the functions in each successive post.
* Each language has its advantages and disadvantages. From the performance perspective, I think Python is the best, followed by Octave and then R.
* Interestingly, I noticed that even if small bugs creep into your implementation, the DL network does learn and does generate a valid set of weights and biases; however, this may not be an optimum solution. In one case of an inadvertent bug, I was not updating the weights in the final layer of the DL network. Yet, using all the other layers, the DL network was able to come up with a reasonable solution (maybe, like random dropout, the remaining units can still learn the data!)
* Having said that, the Gradient Check method discussed and implemented in this post can be very useful in ironing out bugs.
Feel free to clone/download the code from Github at DeepLearning-Part8

Conclusion

These last couple of months, when I was writing the posts and also churning out the code in Python, R and Octave, were very hectic. There have been times when I found the implementation of some functions to be extremely demanding and I almost felt like giving up. At other times, I have spent quite some time on an intractable DL network which would not respond to changes in hyper-parameters. All in all, it was a great learning experience. I would suggest that you start from my first post Deep Learning from first principles in Python, R and Octave – Part 1 and work your way up. Feel free to take the code apart and try out things. That is the only way you will learn.

Hope you had as much fun as I had. Stay tuned. I will be back!!!

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Revisiting crimes against women in India
3. Literacy in India – A deepR dive
4. Sixer – R package cricketr’s new Shiny avatar
5. Bend it like Bluemix, MongoDB using Auto-scale – Part 1!
6. Computer Vision: Ramblings on derivatives, histograms and contours
7. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
8. A closer look at “Robot Horse on a Trot” in Android

To see all posts click Index of Posts

Deep Learning from first principles in Python, R and Octave – Part 7

Artificial Intelligence is the new electricity. – Prof Andrew Ng

Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don’t know how to make the cake. We need to solve the unsupervised learning problem before we can even think of getting to true AI.  – Yann LeCun, March 14, 2016 (Facebook)

Introduction

In this post, ‘Deep Learning from first principles with Python, R and Octave – Part 7’, I implement optimization methods used in Stochastic Gradient Descent (SGD) to speed up convergence. Specifically, I discuss and implement the following gradient descent optimization techniques

a. Vanilla Stochastic Gradient Descent
b. Learning rate decay
c. Momentum method
d. RMSProp
e. Adaptive Moment Estimation (Adam)

This post further enhances my generic L-Layer Deep Learning Network implementations in vectorized Python, R and Octave to also include the Stochastic Gradient Descent optimization techniques. You can clone/download the code from Github at DeepLearning-Part7

You can view my video  presentation on Gradient Descent Optimization in Neural Networks 7

Incidentally, a good discussion of the various optimization methods used in Stochastic Gradient Descent can be seen at Sebastian Ruder’s blog

Note: In the vectorized Python, R and Octave implementations below, only 1024 random training samples were used. This was done to reduce the computation time. You are free to use the entire data set (60000 training samples) for the computation.

This post is largely based on Prof Andrew Ng’s Deep Learning Specialization. All the above optimization techniques for Stochastic Gradient Descent are based on the technique of exponentially weighted averages. So, for example, if we had some time series data \theta_{1},\theta_{2},\theta_{3}... \theta_{t}, then we can represent the exponentially weighted average value at time ‘t’ in terms of the previous value v_{t-1} and \theta_{t} as shown below
v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}

Here v_{t} represents the average of the data over approximately the last \frac {1}{1-\beta} data points. By choosing different values of \beta, we can average over a larger or smaller number of data points.
We can write the equations as follows
v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}
v_{t-1} = \beta v_{t-2} + (1-\beta)\theta_{t-1}
v_{t-2} = \beta v_{t-3} + (1-\beta)\theta_{t-2}
and
v_{t-k} = \beta v_{t-(k+1)} + (1-\beta)\theta_{t-k}
By substitution we have
v_{t} = (1-\beta)\theta_{t} + \beta v_{t-1}
v_{t} = (1-\beta)\theta_{t} + \beta (1-\beta)\theta_{t-1} + \beta^{2} v_{t-2}
v_{t} = (1-\beta)\theta_{t} + \beta (1-\beta)\theta_{t-1} + \beta^{2} (1-\beta)\theta_{t-2} + \beta^{3} v_{t-3}

Hence it can be seen that v_{t} is a weighted sum of the previous values \theta_{k}, where the weights decay exponentially.
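
To make this concrete, here is a small, self-contained Python sketch (my own illustration, not part of the DLfunctions implementations) that computes the exponentially weighted average of a noisy series. With \beta = 0.9, each value v_{t} effectively averages over roughly the last 1/(1-\beta) = 10 data points; momentum applies this same recurrence to the gradients, while RMSProp and Adam also apply it to the squared gradients.

import numpy as np

def exp_weighted_average(theta, beta=0.9):
    # v_t = beta*v_{t-1} + (1-beta)*theta_t for a 1-D series
    v = np.zeros_like(theta)
    v_prev = 0.0
    for t, x in enumerate(theta):
        v_prev = beta*v_prev + (1 - beta)*x
        v[t] = v_prev
    return v

# A noisy time series
np.random.seed(0)
theta = np.sin(np.linspace(0, 3*np.pi, 100)) + 0.3*np.random.randn(100)
v = exp_weighted_average(theta, beta=0.9)   # smoothed version of theta
print(v[:5])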

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python- Machine Learning in stereo” available in Amazon in paperback($9.99) and Kindle($6.99) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.

1.1a. Stochastic Gradient Descent (Vanilla) – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read the training data
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set the layer dimensions  
layersDimensions=[784, 15,9,10] 
# Perform SGD with regular gradient descent
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="gd",
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig1.png")

1.1b. Stochastic Gradient Descent (Vanilla) – R

source("mnist.R")
source("DLfunctions7.R")
#Load and read MNIST data
load_mnist() 
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
# Set layer dimensions
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with regular gradient descent
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                            hiddenActivationFunc='tanh',
                            outputActivationFunc="softmax",
                            learningRate = 0.05,
                            optimizer="gd",
                            mini_batch_size = 512, 
                            num_epochs = 5000, 
                            print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs no of epochs") + xlab("No of epochss") + ylab("Cost")

1.1c. Stochastic Gradient Descent (Vanilla) – Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with regular gradient descent
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.005,
 lrDecay=true, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="gd",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);

plotCostVsEpochs(5000,costs);

2.1. Stochastic Gradient Descent with Learning rate decay

Since in Stochastic Gradient Descent, with each epoch we use slightly different samples, the gradient descent algorithm oscillates across the ravines and wanders around the minima when a fixed learning rate is used. In this technique of ‘learning rate decay’, the learning rate is slowly decreased with the number of epochs, so that gradient descent can take smaller and smaller steps towards the minima.

There are several techniques employed in learning rate decay

a) Exponential decay: \alpha = decayRate^{epochNum} *\alpha_{0}
b) 1/t decay : \alpha = \frac{\alpha_{0}}{1 + decayRate*epochNum}
c) \alpha = \frac {decayRate}{\sqrt(epochNum)}*\alpha_{0}

In my implementation I have used the ‘exponential decay’. The code snippet for Python is shown below

# Exponentially decay the learning rate
if lrDecay == True:
   learningRate = np.power(decayRate,(num_epochs/1000)) * learningRate
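For reference, the three decay schedules listed above can also be written as small Python functions; the values of alpha0 and decayRate below are purely illustrative and are not the ones used in the implementation.

import numpy as np

alpha0 = 0.01       # illustrative initial learning rate
decayRate = 0.9999  # illustrative decay rate

def exponentialDecay(epochNum):
    # a) alpha = decayRate^epochNum * alpha0
    return np.power(decayRate, epochNum) * alpha0

def oneOverTDecay(epochNum):
    # b) alpha = alpha0/(1 + decayRate*epochNum)
    return alpha0 / (1 + decayRate * epochNum)

def sqrtDecay(epochNum):
    # c) alpha = decayRate/sqrt(epochNum) * alpha0
    return decayRate / np.sqrt(epochNum) * alpha0

for epoch in [1, 1000, 5000]:
    print(epoch, exponentialDecay(epoch), oneOverTDecay(epoch), sqrtDecay(epoch))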

2.1a. Stochastic Gradient Descent with Learning rate decay – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read the MNIST data
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set layer dimensions
layersDimensions=[784, 15,9,10] 
# Perform SGD with learning rate decay
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",
                                   learningRate = 0.01 , lrDecay=True, decayRate=0.9999,
                                   optimizer="gd",
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig2.png")

2.1b. Stochastic Gradient Descent with Learning rate decay – R

source("mnist.R")
source("DLfunctions7.R")
# Read and load MNIST
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
# Set layer dimensions
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with Learning rate decay
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.05,
                                  lrDecay=TRUE,
                                  decayRate=0.9999,
                                  optimizer="gd",
                                  mini_batch_size = 512, 
                                  num_epochs = 5000, 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

2.1c. Stochastic Gradient Descent with Learning rate decay – Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with Learning rate decay
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.01,
 lrDecay=true, 
 decayRate=0.999,
 lambd=0,
 keep_prob=1,
 optimizer="gd",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);
plotCostVsEpochs(5000,costs)

3.1. Stochastic Gradient Descent with Momentum

Stochastic Gradient Descent with Momentum uses the exponentially weighted average method discussed above, and moves faster along the ravine than across it. The equations are
v_{dW}^l = \beta v_{dW}^l + (1-\beta)dW^{l}
v_{db}^l = \beta v_{db}^l + (1-\beta)db^{l}
W^{l} = W^{l} - \alpha v_{dW}^l
b^{l} = b^{l} - \alpha v_{db}^l where
v_{dW} and v_{db} are the momentum terms, which are exponentially weighted averages of the corresponding gradients ‘dW’ and ‘db’ at layer ‘l’. The code snippet for Stochastic Gradient Descent with momentum in R is shown below

# Perform Gradient Descent with momentum
# Input : Weights and biases
#       : beta
#       : gradients
#       : learning rate
#       : outputActivationFunc - Activation function at output layer sigmoid/softmax
#output : Updated weights after 1 iteration
gradientDescentWithMomentum  <- function(parameters, gradients,v, beta, learningRate,outputActivationFunc="sigmoid"){

    L = length(parameters)/2 # number of layers in the neural network    
    # Update rule for each parameter. Use a for loop.
    for(l in 1:(L-1)){
        # Compute velocities
        # v['dWk'] = beta *v['dWk'] + (1-beta)*dWk
        v[[paste("dW",l, sep="")]] = beta*v[[paste("dW",l, sep="")]] + 
                   (1-beta) * gradients[[paste('dW',l,sep="")]]
        v[[paste("db",l, sep="")]] = beta*v[[paste("db",l, sep="")]] + 
            (1-beta) * gradients[[paste('db',l,sep="")]]
        
        parameters[[paste("W",l,sep="")]] = parameters[[paste("W",l,sep="")]] -
            learningRate* v[[paste("dW",l, sep="")]] 
        parameters[[paste("b",l,sep="")]] = parameters[[paste("b",l,sep="")]] -
            learningRate* v[[paste("db",l, sep="")]] 
    }    
    # Compute for the Lth layer
    if(outputActivationFunc=="sigmoid"){
        v[[paste("dW",L, sep="")]] = beta*v[[paste("dW",L, sep="")]] + 
            (1-beta) * gradients[[paste('dW',L,sep="")]]
        v[[paste("db",L, sep="")]] = beta*v[[paste("db",L, sep="")]] + 
            (1-beta) * gradients[[paste('db',L,sep="")]]
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate* v[[paste("dW",l, sep="")]]  
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate* v[[paste("db",l, sep="")]]
        
    }else if (outputActivationFunc=="softmax"){
        v[[paste("dW",L, sep="")]] = beta*v[[paste("dW",L, sep="")]] + 
            (1-beta) * t(gradients[[paste('dW',L,sep="")]])
        v[[paste("db",L, sep="")]] = beta*v[[paste("db",L, sep="")]] + 
            (1-beta) * t(gradients[[paste('db',L,sep="")]])       
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate* t(gradients[[paste("dW",L,sep="")]])
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate* t(gradients[[paste("db",L,sep="")]])
    }
    return(parameters)
}
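For comparison, the same per-layer momentum update can be written as a minimal NumPy sketch; the names and shapes below are illustrative and this is not the code in DLfunctions7.py.

import numpy as np

def momentumUpdate(W, b, dW, db, vdW, vdb, beta=0.9, learningRate=0.01):
    # Exponentially weighted average of the gradients (the 'velocity')
    vdW = beta * vdW + (1 - beta) * dW
    vdb = beta * vdb + (1 - beta) * db
    # Step along the smoothed gradient
    W = W - learningRate * vdW
    b = b - learningRate * vdb
    return W, b, vdW, vdb

# Illustrative usage for a layer with 9 units and 784 inputs
W = np.random.randn(9, 784); b = np.zeros((9, 1))
dW = np.random.randn(9, 784); db = np.random.randn(9, 1)
vdW = np.zeros_like(W); vdb = np.zeros_like(b)
W, b, vdW, vdb = momentumUpdate(W, b, dW, db, vdW, vdb)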

3.1a. Stochastic Gradient Descent with Momentum- Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
# Read and load data
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10] 
# Perform SGD with momentum
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="momentum", beta=0.9,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig3.png")

3.1b. Stochastic Gradient Descent with Momentum- R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
# Perform SGD with momentum
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.05,
                                  optimizer="momentum",
                                  beta=0.9,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000, 
                                  print_cost = TRUE)

#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

3.1c. Stochastic Gradient Descent with Momentum- Octave

source("DL7functions.m")
#Load and read MNIST
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with Momentum
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.01,
 lrDecay=false, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="momentum",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=10^-8,
 mini_batch_size = 512, 
 num_epochs = 5000);

plotCostVsEpochs(5000,costs)

4.1. Stochastic Gradient Descent with RMSProp

Stochastic Gradient Descent with RMSProp tries to move faster towards the minima while dampening the oscillations across the ravine.
The equations are

s_{dW}^l = \beta_{1} s_{dW}^l + (1-\beta_{1})(dW^{l})^{2}
s_{db}^l = \beta_{1} s_{db}^l + (1-\beta_{1})(db^{l})^{2}
W^{l} = W^{l} - \frac {\alpha dW^{l}}{\sqrt {s_{dW}^l + \epsilon} }
b^{l} = b^{l} - \frac {\alpha db^{l}}{\sqrt {s_{db}^l + \epsilon} }
where s_{dW} and s_{db} are the RMSProp terms, which are exponentially weighted averages of the squares of the corresponding gradients ‘dW’ and ‘db’ at layer ‘l’

The code snippet in Octave is shown below

# Update parameters with RMSProp
# Input : weights, biases
#       : gradients (gradsDW, gradsDB)
#       : sdW, sdB - running averages of the squared gradients
#       : beta1, epsilon
#       : learningRate
#       : outputActivationFunc - Activation function at output layer sigmoid/softmax
#output : Updated weights and biases after 1 iteration
function [weights biases] = gradientDescentWithRMSProp(weights, biases,gradsDW,gradsDB, sdW, sdB, beta1, epsilon, learningRate,outputActivationFunc="sigmoid")
    L = size(weights)(2); # number of layers in the neural network
    # Update rule for each parameter. 
    for l=1:(L-1)
        sdW{l} =  beta1*sdW{l} + (1 -beta1) * gradsDW{l} .* gradsDW{l};
        sdB{l} =  beta1*sdB{l} + (1 -beta1) * gradsDB{l} .* gradsDB{l};
        weights{l} = weights{l} - learningRate* gradsDW{l} ./ sqrt(sdW{l} + epsilon); 
        biases{l} = biases{l} -  learningRate* gradsDB{l} ./ sqrt(sdB{l} + epsilon);
    endfor
  
    if (strcmp(outputActivationFunc,"sigmoid"))
        sdW{L} =  beta1*sdW{L} + (1 -beta1) * gradsDW{L} .* gradsDW{L};
        sdB{L} =  beta1*sdB{L} + (1 -beta1) * gradsDB{L} .* gradsDB{L};
        weights{L} = weights{L} -learningRate* gradsDW{L} ./ sqrt(sdW{L} +epsilon); 
        biases{L} = biases{L} -learningRate* gradsDB{L} ./ sqrt(sdB{L} + epsilon);
     elseif (strcmp(outputActivationFunc,"softmax"))
        sdW{L} =  beta1*sdW{L} + (1 -beta1) * gradsDW{L}' .* gradsDW{L}';
        sdB{L} =  beta1*sdB{L} + (1 -beta1) * gradsDB{L}' .* gradsDB{L}';
        weights{L} = weights{L} -learningRate* gradsDW{L}' ./ sqrt(sdW{L} +epsilon); 
        biases{L} = biases{L} -learningRate* gradsDB{L}' ./ sqrt(sdB{L} + epsilon);
     endif   
end
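The same per-layer RMSProp update, written as a minimal NumPy sketch (illustrative names, not the DLfunctions7 code), is

import numpy as np

def rmspropUpdate(W, b, dW, db, sdW, sdb, beta1=0.9, epsilon=1e-8, learningRate=0.01):
    # Exponentially weighted average of the squared gradients
    sdW = beta1 * sdW + (1 - beta1) * np.square(dW)
    sdb = beta1 * sdb + (1 - beta1) * np.square(db)
    # Scale the gradient step by the root of the running mean square
    W = W - learningRate * dW / np.sqrt(sdW + epsilon)
    b = b - learningRate * db / np.sqrt(sdb + epsilon)
    return W, b, sdW, sdb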

4.1a. Stochastic Gradient Descent with RMSProp – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())

# Read and load MNIST
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

print("X1=",X1.shape)
print("y1=",Y1.shape)

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
  
layersDimensions=[784, 15,9,10] 
# Use SGD with RMSProp
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="rmsprop", beta1=0.7, epsilon=1e-8,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig4.png")

4.1b. Stochastic Gradient Descent with RMSProp – R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
#Perform SGD with RMSProp
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.001,
                                  optimizer="rmsprop",
                                  beta1=0.9,
                                  epsilon=10^-8,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000 , 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

4.1c. Stochastic Gradient Descent with RMSProp – Octave

source("DL7functions.m")
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
#Perform SGD with RMSProp
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
 hiddenActivationFunc='relu', 
 outputActivationFunc="softmax",
 learningRate = 0.005,
 lrDecay=false, 
 decayRate=1,
 lambd=0,
 keep_prob=1,
 optimizer="rmsprop",
 beta=0.9,
 beta1=0.9,
 beta2=0.999,
 epsilon=1,
 mini_batch_size = 512, 
 num_epochs = 5000);
plotCostVsEpochs(5000,costs)

5.1. Stochastic Gradient Descent with Adam

Adaptive Moment Estimation (Adam) is a combination of the momentum method (1st moment) and RMSProp (2nd moment). The equations for Adam are given below
v_{dW}^l = \beta_{1} v_{dW}^l + (1-\beta_{1})dW^{l}
v_{db}^l = \beta_{1} v_{db}^l + (1-\beta_{1})db^{l}
The bias corrections for the 1st moment
vCorrected_{dW}^l= \frac {v_{dW}^l}{1 - \beta_{1}^{t}}
vCorrected_{db}^l= \frac {v_{db}^l}{1 - \beta_{1}^{t}}

Similarly, the moving average for the 2nd moment (RMSProp) is
s_{dW}^l = \beta_{2} s_{dW}^l + (1-\beta_{2})(dW^{l})^2
s_{db}^l = \beta_{2} s_{db}^l + (1-\beta_{2})(db^{l})^2
The bias corrections for the 2nd moment
sCorrected_{dW}^l= \frac {s_{dW}^l}{1 - \beta_{2}^{t}}
sCorrected_{db}^l= \frac {s_{db}^l}{1 - \beta_{2}^{t}}

The Adam Gradient Descent is given by
W^{l} = W^{l} - \frac {\alpha vCorrected_{dW}^l}{\sqrt {sCorrected_{dW}^l + \epsilon} }
b^{l} = b^{l} - \frac {\alpha vCorrected_{db}^l}{\sqrt {sCorrected_{db}^l + \epsilon} }
The code snippet of Adam in R is included below

# Perform Gradient Descent with Adam
# Input : parameters - Weights and biases
#       : gradients
#       : v, s - 1st and 2nd moment estimates
#       : t - iteration count used for bias correction
#       : beta1, beta2, epsilon
#       : learning rate
#       : outputActivationFunc - Activation function at output layer sigmoid/softmax
#output : Updated weights after 1 iteration
gradientDescentWithAdam  <- function(parameters, gradients,v, s, t, 
                        beta1=0.9, beta2=0.999, epsilon=10^-8, learningRate=0.1,outputActivationFunc="sigmoid"){
    
    L = length(parameters)/2 # number of layers in the neural network
    v_corrected <- list()
    s_corrected <- list()
    # Update rule for each parameter. Use a for loop.
    for(l in 1:(L-1)){
        # v['dWk'] = beta *v['dWk'] + (1-beta)*dWk
        v[[paste("dW",l, sep="")]] = beta1*v[[paste("dW",l, sep="")]] + 
            (1-beta1) * gradients[[paste('dW',l,sep="")]]
        v[[paste("db",l, sep="")]] = beta1*v[[paste("db",l, sep="")]] + 
            (1-beta1) * gradients[[paste('db',l,sep="")]]
        
        
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",l, sep="")]] = v[[paste("dW",l, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",l, sep="")]] = v[[paste("db",l, sep="")]]/(1-beta1^t)
               
        # Element wise multiply of gradients
        s[[paste("dW",l, sep="")]] = beta2*s[[paste("dW",l, sep="")]] + 
            (1-beta2) * gradients[[paste('dW',l,sep="")]] * gradients[[paste('dW',l,sep="")]]
        s[[paste("db",l, sep="")]] = beta2*s[[paste("db",l, sep="")]] + 
            (1-beta2) * gradients[[paste('db',l,sep="")]] * gradients[[paste('db',l,sep="")]]
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",l, sep="")]] = s[[paste("dW",l, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",l, sep="")]] = s[[paste("db",l, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",l, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",l, sep="")]]+epsilon)        
                
        parameters[[paste("W",l,sep="")]] = parameters[[paste("W",l,sep="")]] -
            learningRate * v_corrected[[paste("dW",l, sep="")]]/d1
        parameters[[paste("b",l,sep="")]] = parameters[[paste("b",l,sep="")]] -
            learningRate*v_corrected[[paste("db",l, sep="")]]/d2
    }    
    # Compute for the Lth layer
    if(outputActivationFunc=="sigmoid"){
        v[[paste("dW",L, sep="")]] = beta1*v[[paste("dW",L, sep="")]] + 
            (1-beta1) * gradients[[paste('dW',L,sep="")]]
        v[[paste("db",L, sep="")]] = beta1*v[[paste("db",L, sep="")]] + 
            (1-beta1) * gradients[[paste('db',L,sep="")]]
                
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",L, sep="")]] = v[[paste("dW",L, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",L, sep="")]] = v[[paste("db",L, sep="")]]/(1-beta1^t)
                
        # Element wise multiply of gradients
        s[[paste("dW",L, sep="")]] = beta2*s[[paste("dW",L, sep="")]] + 
            (1-beta2) * gradients[[paste('dW',L,sep="")]] * gradients[[paste('dW',L,sep="")]]
        s[[paste("db",L, sep="")]] = beta2*s[[paste("db",L, sep="")]] + 
            (1-beta2) * gradients[[paste('db',L,sep="")]] * gradients[[paste('db',L,sep="")]]
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",L, sep="")]] = s[[paste("dW",L, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",L, sep="")]] = s[[paste("db",L, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",L, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",L, sep="")]]+epsilon)  
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate * v_corrected[[paste("dW",L, sep="")]]/d1
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate*v_corrected[[paste("db",L, sep="")]]/d2
        
    }else if (outputActivationFunc=="softmax"){
        v[[paste("dW",L, sep="")]] = beta1*v[[paste("dW",L, sep="")]] + 
            (1-beta1) * t(gradients[[paste('dW',L,sep="")]])
        v[[paste("db",L, sep="")]] = beta1*v[[paste("db",L, sep="")]] + 
            (1-beta1) * t(gradients[[paste('db',L,sep="")]])
                
        # Compute bias-corrected first moment estimate. 
        v_corrected[[paste("dW",L, sep="")]] = v[[paste("dW",L, sep="")]]/(1-beta1^t)
        v_corrected[[paste("db",L, sep="")]] = v[[paste("db",L, sep="")]]/(1-beta1^t)        
        
        # Element wise multiply of gradients
        s[[paste("dW",L, sep="")]] = beta2*s[[paste("dW",L, sep="")]] + 
            (1-beta2) * t(gradients[[paste('dW',L,sep="")]]) * t(gradients[[paste('dW',L,sep="")]])
        s[[paste("db",L, sep="")]] = beta2*s[[paste("db",L, sep="")]] + 
            (1-beta2) * t(gradients[[paste('db',L,sep="")]]) * t(gradients[[paste('db',L,sep="")]])
        
        # Compute bias-corrected second moment estimate. 
        s_corrected[[paste("dW",L, sep="")]] = s[[paste("dW",L, sep="")]]/(1-beta2^t)
        s_corrected[[paste("db",L, sep="")]] = s[[paste("db",L, sep="")]]/(1-beta2^t)
        
        # Update parameters. 
        d1=sqrt(s_corrected[[paste("dW",L, sep="")]]+epsilon)
        d2=sqrt(s_corrected[[paste("db",L, sep="")]]+epsilon) 
        
        parameters[[paste("W",L,sep="")]] = parameters[[paste("W",L,sep="")]] -
            learningRate * v_corrected[[paste("dW",L, sep="")]]/d1
        parameters[[paste("b",L,sep="")]] = parameters[[paste("b",L,sep="")]] -
            learningRate*v_corrected[[paste("db",L, sep="")]]/d2
    }
    return(parameters)
}
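A compact NumPy sketch of the same Adam update for a single parameter matrix is shown below (illustrative code, not the DLfunctions7 implementation); ‘t’ is the update count used in the bias corrections.

import numpy as np

def adamUpdate(W, dW, v, s, t, beta1=0.9, beta2=0.999, epsilon=1e-8, learningRate=0.01):
    # 1st moment (momentum) and its bias correction
    v = beta1 * v + (1 - beta1) * dW
    vCorrected = v / (1 - beta1 ** t)
    # 2nd moment (RMSProp) and its bias correction
    s = beta2 * s + (1 - beta2) * np.square(dW)
    sCorrected = s / (1 - beta2 ** t)
    # Parameter update
    W = W - learningRate * vCorrected / np.sqrt(sCorrected + epsilon)
    return W, v, s

# Illustrative usage
W = np.random.randn(10, 9); dW = np.random.randn(10, 9)
v = np.zeros_like(W); s = np.zeros_like(W)
W, v, s = adamUpdate(W, dW, v, s, t=1)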

5.1a. Stochastic Gradient Descent with Adam – Python

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
exec(open("DLfunctions7.py").read())
exec(open("load_mnist.py").read())
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
lbls=[]
pxls=[]
print(len(training))
#for i in range(len(training)):
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T


# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10] 
#Perform SGD with Adam optimization
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu', 
                                   outputActivationFunc="softmax",learningRate = 0.01 ,
                                   optimizer="adam", beta1=0.9, beta2=0.9, epsilon = 1e-8,
                                   mini_batch_size =512, num_epochs = 1000, print_cost = True, figure="fig5.png")

5.1b. Stochastic Gradient Descent with Adam – R

source("mnist.R")
source("DLfunctions7.R")
load_mnist()
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 1024 random samples from MNIST 
permutation = c(sample(2^10))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)
layersDimensions=c(784, 15,9, 10) 
#Perform SGD with Adam
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                                  hiddenActivationFunc='tanh',
                                  outputActivationFunc="softmax",
                                  learningRate = 0.005,
                                  optimizer="adam",
                                  beta1=0.7,
                                  beta2=0.9,
                                  epsilon=10^-8,
                                  mini_batch_size = 512, 
                                  num_epochs = 5000 , 
                                  print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvalsSGD$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")

5.1c. Stochastic Gradient Descent with Adam – Octave

source("DL7functions.m")
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);
# Set layer dimensions
layersDimensions=[784, 15, 9, 10];

# Note the high value for epsilon. 
#Otherwise GD with Adam does not seem to converge   
# Perform SGD with Adam         
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
                       hiddenActivationFunc='relu', 
                       outputActivationFunc="softmax",
                       learningRate = 0.1,
                       lrDecay=false, 
                       decayRate=1,
                       lambd=0,
                       keep_prob=1,
                       optimizer="adam",
                       beta=0.9,
                       beta1=0.9,
                       beta2=0.9,
                       epsilon=100,
                       mini_batch_size = 512, 
                       num_epochs = 5000);
plotCostVsEpochs(5000,costs)

Conclusion: In this post I discuss and implement several Stochastic Gradient Descent optimization methods. These methods enhance my already existing generic L-Layer Deep Learning Network implementation in vectorized Python, R and Octave, which I discussed in the previous post in this series on Deep Learning from first principles in Python, R and Octave. Check it out, if you haven’t already. As already mentioned, the code for this post can be cloned/forked from Github at DeepLearning-Part7

Watch this space! I’ll be back!

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Deep Learning from first principles in Python, R and Octave – Part 3
3. Experiments with deblurring using OpenCV
4. Design Principles of Scalable, Distributed Systems
5. Natural language processing: What would Shakespeare say?
6. yorkr crashes the IPL party! – Part 3!
7. cricketr flexes new muscles: The final analysis

To see all posts click Index of posts

Deep Learning from first principles in Python, R and Octave – Part 3

“Once upon a time, I, Chuang Tzu, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of following my fancies as a butterfly, and was unconscious of my individuality as a man. Suddenly, I awoke, and there I lay, myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly dreaming that I am a man.”
from The Brain: The Story of you – David Eagleman

“Thought is a great big vector of neural activity”
Prof Geoffrey Hinton

Introduction

This is the third part in my series on Deep Learning from first principles in Python, R and Octave. In the first part, Deep Learning from first principles in Python, R and Octave-Part 1, I implemented logistic regression as a 2 layer neural network. The 2nd part, Deep Learning from first principles in Python, R and Octave-Part 2, dealt with the implementation of 3 layer Neural Networks with 1 hidden layer to perform classification tasks, where the 2 classes cannot be separated by a linear boundary. In this third part, I implement a multi-layer Deep Learning (DL) network of arbitrary depth (any number of hidden layers) and arbitrary height (any number of activation units in each hidden layer). The implementations of these Deep Learning networks, in all 3 parts, are based on vectorized versions in Python, R and Octave. The implementation in this 3rd part is for an L-layer Deep Network, but without any regularization, early stopping, momentum or learning rate adaptation techniques. However, even the barebones multi-layer DL network is a handful, and has enough hyperparameters to fine-tune and adjust.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

The implementation of the vectorized L-layer Deep Learning network in Python, R and Octave was both exhausting and exacting!! Keeping track of the indices, layer numbers and matrix dimensions required quite a bit of focus. While the implementation was demanding, it was also very exciting to get the code to work. The trick was to be able to shift gears between the slight quirks of the different languages. Here are some of the challenges I faced.

1. Python and Octave allow multiple return values to be unpacked in a single statement. With R, unpacking multiple return values from a list requires the returned list to be unpacked separately. I did see that there is a package gsubfn, which does this. I hope this becomes a base R feature.
2. Python and R allow dissimilar elements to be saved and returned from functions using dictionaries or lists respectively. However there is no real equivalent in Octave. The closest I got to this functionality in Octave, was the ‘cell array’. But the cell array can be accessed only by the index, and not with the key as in a Python dictionary or R list. This makes things just a bit more difficult in Octave.
3. Python and Octave include implicit broadcasting. In R, broadcasting is not implicit, but R has a nifty function, the sweep(), with which we can broadcast either by columns or by rows
4. The closest equivalent of Python’s dictionary, or R’s list, in Octave is the cell array. However, I had to manage separate cell arrays for the weights and biases, and separate cell arrays for the gradients dW and dB during gradient descent.
5. In Python the rank-1 numpy arrays can be annoying at times. This issue is not present in R and Octave.

Though the number of lines of code for the Deep Learning functions in Python, R and Octave is only about 350 apiece, they have been some of the most difficult code I have implemented. The current vectorized implementation supports the relu, sigmoid and tanh activation functions as of now. I will be adding other activation functions like the ‘leaky relu’, ‘softmax’ and others to the implementation in the weeks to come.

While testing with different hyper-parameters, namely i) the number of hidden layers, ii) the number of activation units in each layer, iii) the activation function and iv) the number of iterations, I found the L-layer Deep Learning Network to be very sensitive to these hyper-parameters. It is not easy to tune the parameters. Adding more hidden layers, or more units per layer, does not help and mostly results in gradient descent getting stuck in some local minima. It does take a fair amount of trial and error, and very close observation of how the DL network performs as the hyper-parameters are changed. We can then zero in on the optimal solution. Feel free to download/fork my code from Github DeepLearning-Part 3 and play around with the hyper-parameters for your own problems.

Derivation of a Multi Layer Deep Learning Network

Note: A detailed discussion of the derivation below is available in my video presentation Neural Network 4
Let’s take a simple 3 layer Neural Network with 2 hidden layers and an output layer

In the forward propagation cycle the equations are

Z_{1} = W_{1}A_{0} +b_{1}  and  A_{1} = g(Z_{1})
Z_{2} = W_{2}A_{1} +b_{2}  and  A_{2} = g(Z_{2})
Z_{3} = W_{3}A_{2} +b_{3}  and A_{3} = g(Z_{3})

The loss function is given by
L = -(Y log A_{3} + (1-Y) log(1-A_{3}))
and dL/dA_{3} = -Y/A_{3} + (1-Y)/(1-A_{3})

For a binary classification the output activation function is the sigmoid function given by
A_{3} = 1/(1+ e^{-Z3}). It can be shown that
dA_{3}/dZ_{3} = A_{3}(1-A_{3}) – see equation 2 in Part 1

\partial L/\partial Z_{3} = \partial L/\partial A_{3} * \partial A_{3}/\partial Z_{3} = A_{3}-Y – see equation (f) in Part 1
and since
\partial L/\partial A_{2} = \partial L/\partial Z_{3} * \partial Z_{3}/\partial A_{2} = (A_{3} -Y) * W_{3} because \partial Z_{3}/\partial A_{2} = W_{3} -(1a)
and \partial L/\partial Z_{2} =\partial L/\partial A_{2} * \partial A_{2}/\partial Z_{2} = (A_{3} -Y) * W_{3} *g'(Z_{2}) -(1b)
\partial L/\partial W_{2} = \partial L/\partial Z_{2} * A_{1} -(1c)
since \partial Z_{2}/\partial W_{2} = A_{1}
and
\partial L/\partial b_{2} = \partial L/\partial Z_{2} -(1d)
because
\partial Z_{2}/\partial b_{2} =1

Also

\partial L/\partial A_{1} =\partial L/\partial Z_{2} * \partial Z_{2}/\partial A_{1} = \partial L/\partial Z_{2} * W_{2}     – (2a)
\partial L/\partial Z_{1} =\partial L/\partial A_{1} * \partial A_{1}/\partial Z_{1} = \partial L/\partial A_{1} * W_{2} *g'(Z_{1})          – (2b)
\partial L/\partial W_{1} = \partial L/\partial Z_{1} * A_{0} – (2c)
\partial L/\partial b_{1} = \partial L/\partial Z_{1} – (2d)

Inspecting the above equations (1a – 1d & 2a-2d), our ‘Uber deep, bottomless’ brain  can easily discern the pattern in these equations. The equation for any layer ‘l’ is of the form
Z_{l} = W_{l}A_{l-1} +b_{l}     and  A_{l} = g(Z_{l})
The equation for the backward propagation have the general form
\partial L/\partial A_{l} = \partial L/\partial Z_{l+1} * W_{l+1}
\partial L/\partial Z_{l} = \partial L/\partial A_{l} * g'(Z_{l})
\partial L/\partial W_{l} = \partial L/\partial Z_{l} * A_{l-1}
\partial L/\partial b_{l} = \partial L/\partial Z_{l}
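A minimal NumPy sketch of these general per-layer recurrences is given below (column-vector convention, with A_{l-1} of shape (units in layer l-1, number of samples) and the gradients averaged over the m samples); the function names are my own and not those of the actual implementation.

import numpy as np

def forwardLayer(A_prev, W, b, g):
    Z = np.dot(W, A_prev) + b            # Z_l = W_l A_{l-1} + b_l
    A = g(Z)                             # A_l = g(Z_l)
    return Z, A

def backwardLayer(dA, Z, A_prev, W, gPrime):
    m = A_prev.shape[1]
    dZ = dA * gPrime(Z)                  # dL/dZ_l = dL/dA_l * g'(Z_l)
    dW = np.dot(dZ, A_prev.T) / m        # dL/dW_l = dL/dZ_l * A_{l-1} (averaged over m samples)
    db = np.sum(dZ, axis=1, keepdims=True) / m   # dL/db_l = dL/dZ_l
    dA_prev = np.dot(W.T, dZ)            # dL/dA_{l-1} = W_l * dL/dZ_l
    return dA_prev, dW, db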

Some other important results are the derivatives of the activation functions used in the implemented Deep Learning network
g(z) = sigmoid(z) = 1/(1+e^{-z}) = a and g’(z) = a(1-a) – see Part 1
g(z) = tanh(z) = a and g’(z) = 1 - a^{2}
g(z) = relu(z) = z when z > 0 and 0 when z <= 0, and g’(z) = 1 when z > 0 and 0 when z <= 0
While it appears that there is a discontinuity in the derivative at 0, this small discontinuity does not present a problem in practice
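These activations and their derivatives can be written compactly in NumPy as below (a sketch for illustration; the function names are my own and not those of the actual implementation).

import numpy as np

def sigmoid(z):
    a = 1.0 / (1.0 + np.exp(-z))
    return a, a * (1 - a)                # g(z), g'(z) = a(1-a)

def tanhAct(z):
    a = np.tanh(z)
    return a, 1 - a ** 2                 # g'(z) = 1 - a^2

def relu(z):
    a = np.maximum(0, z)
    return a, (z > 0).astype(float)      # g'(z) = 1 if z > 0, else 0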

The implementation of the multi layer vectorized Deep Learning Network for Python, R and Octave is included below. For all these implementations, I initially create the size and configuration of the Deep Learning network with the layer dimensions. So, for example, the layersDimensions vector ‘V’ of length L indicates ‘L’ layers, where

V (in Python)= [v_{0}, v_{1}, v_{2}, … v_{L-1}]
V (in R)= c(v_{1}, v_{2}, v_{3} , … v_{L})
V (in Octave)= [v_{1} v_{2} v_{3} … v_{L}]

In all of these implementations, the first element is the number of input features to the Deep Learning network, and the last element is always a single output unit with a ‘sigmoid’ activation function, since all the problems deal with binary classification.

The elements between the first and the last correspond to the hidden layers, and the magnitude of each v_{i} is the number of activation units in that hidden layer. These layer dimensions are specified while actually executing the Deep Learning network, using the function L_Layer_DeepModel(), in all the implementations – Python, R and Octave.

1a. Classification with Multi layer Deep Learning Network – Relu activation(Python)

In the code below, a 4 layer Neural Network with the ‘Relu’ activation function is trained to generate a non-linear boundary between the classes. The number of activation units in each hidden layer is 9. The cost vs iterations is plotted in addition to the decision boundary. Further, the accuracy, precision, recall and F1 score are also computed.

import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
execfile("./DLfunctions34.py") # 
os.chdir("C:\\software\\DeepLearning-Posts\\part3")

# Create clusters of 2 classes
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of DL Network 
#  Below we have 
#  2 - 2 input features
#  9,9 - 2 hidden layers with 9 activation units per layer and
#  1 - 1 sigmoid activation unit in the output layer as this is a binary classification
# The activation in the hidden layer is the 'relu' specified in L_Layer_DeepModel

layersDimensions = [2, 9, 9,1] #  4-layer model
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions,hiddenActivationFunc='relu', learning_rate = 0.3,num_iterations = 2500, fig="fig1.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2,Y2,str(0.3),"fig2.png")

# Compute the confusion matrix
yhat = predict(parameters,X2)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Y2.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y2.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Y2.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Y2.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Y2.T, yhat.T)))
## Accuracy: 0.90
## Precision: 0.91
## Recall: 0.87
## F1: 0.89

For more details on metrics like Accuracy, Recall, Precision etc. used in classification, take a look at my post Practical Machine Learning with R and Python – Part 2. More details about these and other metrics, besides implementations of the most common machine learning algorithms, are available in my book ‘Practical Machine Learning with R and Python’ on Amazon

1b. Classification with Multi layer Deep Learning Network – Relu activation(R)

In the code below, binary classification is performed on the same data set as above using the Relu activation function. The DL network is the same as above.

library(ggplot2)
# Read the data
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X1 <- t(x)
Y1 <- t(y)

# Set the dimensions of the Deep Learning network
# No of input features =2, 2 hidden layers with 9 activation units and 1 output layer
layersDimensions = c(2, 9, 9,1)
# Execute the Deep Learning Neural Network
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions,
                               hiddenActivationFunc='relu', 
                               learningRate = 0.3,
                               numIterations = 5000, 
                               print_cost = TRUE)
library(ggplot2)
source("DLfunctions33.R")
# Get the computed costs
costs <- retvals[['costs']]
# Create a sequence of iterations
numIterations=5000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
# Plot the Costs vs number of iterations
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

# Plot the decision boundary
plotDecisionBoundary(z,retvals,hiddenActivationFunc="relu",0.3)

library(caret)
# Predict the output for the data values
yhat <-predict(retvals$parameters,X1,hiddenActivationFunc="relu")
yhat[yhat==FALSE]=0
yhat[yhat==TRUE]=1
# Compute the confusion matrix
confusionMatrix(yhat,Y1)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction   0   1
##          0 201  10
##          1  21 168
##                                           
##                Accuracy : 0.9225          
##                  95% CI : (0.8918, 0.9467)
##     No Information Rate : 0.555           
##     P-Value [Acc > NIR] : < 2e-16         
##                                           
##                   Kappa : 0.8441          
##  Mcnemar's Test P-Value : 0.07249         
##                                           
##             Sensitivity : 0.9054          
##             Specificity : 0.9438          
##          Pos Pred Value : 0.9526          
##          Neg Pred Value : 0.8889          
##              Prevalence : 0.5550          
##          Detection Rate : 0.5025          
##    Detection Prevalence : 0.5275          
##       Balanced Accuracy : 0.9246          
##                                           
##        'Positive' Class : 0               
## 

1c. Classification with Multi layer Deep Learning Network – Relu activation(Octave)

Included below is the code for performing classification. Incidentally, Octave does not seem to have implemented the confusion matrix, but confusionmat is available in Matlab.
# Read the data
data=csvread("data.csv");
X=data(:,1:2);
Y=data(:,3);
# Set layer dimensions
layersDimensions = [2 9 7 1] #tanh=-0.5(ok), #relu=0.1 best!
# Execute Deep Network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
hiddenActivationFunc='relu',
learningRate = 0.1,
numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="relu")


2a. Classification with Multi layer Deep Learning Network – Tanh activation(Python)

Below the Tanh activation function is used to perform the same classification. I found the Tanh activation required a simpler Neural Network of 3 layers.

# Tanh activation
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions34.py") 
# Create the dataset
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of the Neural Network
layersDimensions = [2, 4, 1] #  3-layer model
# Compute the DL network
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='tanh', learning_rate = .5,num_iterations = 2500,fig="fig3.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2,Y2,str(0.5),"fig4.png")

2b. Classification with Multi layer Deep Learning Network – Tanh activation(R)

R performs better with a Tanh activation than the Relu as can be seen below

 #Set the dimensions of the Neural Network
layersDimensions = c(2, 9, 9,1)
library(ggplot2)
# Read the data
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X1 <- t(x)
Y1 <- t(y)
# Execute the Deep Model
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions,
                            hiddenActivationFunc='tanh', 
                            learningRate = 0.3,
                            numIterations = 5000, 
                            print_cost = TRUE)
# Get the costs
costs <- retvals[['costs']]
numIterations <- 5000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
# Plot Cost vs number of iterations
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

#Plot the decision boundary
plotDecisionBoundary(z,retvals,hiddenActivationFunc="tanh",0.3)

2c. Classification with Multi layer Deep Learning Network – Tanh activation(Octave)

The code below uses the Tanh activation in the hidden layers for Octave
# Read the data
data=csvread("data.csv");
X=data(:,1:2);
Y=data(:,3);
# Set layer dimensions
layersDimensions = [2 9 7 1] #tanh=-0.5(ok), #relu=0.1 best!
# Execute Deep Network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
hiddenActivationFunc='tanh',
learningRate = 0.1,
numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="tanh")


3. Bernoulli’s Lemniscate

To make things more interesting, I create a 2D figure of Bernoulli’s lemniscate to perform non-linear classification. The lemniscate is given by the equation
(x^{2} + y^{2})^{2} = 2a^{2}*(x^{2}-y^{2})

3a. Classifying a lemniscate with Deep Learning Network – Relu activation(Python)

import os
import numpy as np 
import matplotlib.pyplot as plt
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions33.py") 
x1=np.random.uniform(0,10,2000).reshape(2000,1)
x2=np.random.uniform(0,10,2000).reshape(2000,1)

X=np.append(x1,x2,axis=1)
X.shape

# Classify points as inside (1) or outside (0) the lemniscate centred at (5,5)
# using the equation
# (x^{2} + y^{2})^2 - 2a^2*(x^{2}-y^{2}) <= 0
a=np.power(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2),2)
b=np.power(X[:,0]-5,2) - np.power(X[:,1]-5,2)
c= a - (b*np.power(4,2)) <=0
Y=c.reshape(2000,1)
# Create a scatter plot of the lemniscate
plt.scatter(X[:,0], X[:,1], c=Y, marker= 'o', s=15,cmap="viridis")
Z=np.append(X,Y,axis=1)
plt.savefig("fig50.png",bbox_inches='tight')
plt.clf()

# Set the data for classification
X2=X.T
Y2=Y.T
# These settings work the best
# Set the Deep Learning layer dimensions for a Relu activation
layersDimensions = [2,7,4,1]
#Execute the DL network
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='relu', learning_rate = 0.5,num_iterations = 10000, fig="fig5.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(2.2),"fig6.png")

# Compute the Confusion matrix
yhat = predict(parameters,X2)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Y2.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y2.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Y2.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Y2.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Y2.T, yhat.T)))
## Accuracy: 0.93
## Precision: 0.77
## Recall: 0.76
## F1: 0.76

We could get better performance by tuning further. Do play around if you fork the code.
Note: The lemniscate data is saved as a CSV file and then read in R and Octave, instead of recreating the lemniscate shape in each language.

3b. Classifying a lemniscate with Deep Learning Network – Tanh activation(R code)

The R decision boundary for the Bernoulli’s lemniscate is shown below

Z <- as.matrix(read.csv("lemniscate.csv",header=FALSE))
Z1=data.frame(Z)
# Create a scatter plot of the lemniscate
ggplot(Z1,aes(x=V1,y=V2,col=V3)) +geom_point()
#Set the data for the DL network
X=Z[,1:2]
Y=Z[,3]

X1=t(X)
Y1=t(Y)

# Set the layer dimensions for the tanh activation function
layersDimensions = c(2,5,4,1)
# Execute the Deep Learning network with Tanh activation
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions, 
                               hiddenActivationFunc='tanh', 
                               learningRate = 0.3,
                               numIterations = 20000, print_cost = TRUE)
# Plot cost vs iteration
costs <- retvals[['costs']]
numIterations = 20000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

#Plot the decision boundary
plotDecisionBoundary(Z,retvals,hiddenActivationFunc="tanh",0.3)

3c. Classifying a lemniscate with Deep Learning Network – Relu activation(Octave code)

Octave is used to generate the non-linear lemniscate boundary.

# Read the data
data=csvread("lemniscate.csv");
X=data(:,1:2);
Y=data(:,3);
# Set the dimensions of the layers
layersDimensions = [2 9 7 1]
# Compute the DL network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
hiddenActivationFunc='relu',
learningRate = 0.20,
numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="relu")


4a. Binary Classification using MNIST – Python code

Finally I perform a simple classification using the MNIST handwritten digits, which according to Prof Geoffrey Hinton is “the Drosophila of Deep Learning”.

The Python code for reading the MNIST data is taken from Alex Kesling’s github link MNIST.

In the Python code below, I perform a simple binary classification between the handwritten digit ‘5’ and ‘not 5’ which is all other digits. I will perform the proper classification of all digits using the  Softmax classifier some time later.

import os
import numpy as np 
import matplotlib.pyplot as plt
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions34.py") 
execfile("./load_mnist.py")
training=list(read(dataset='training',path="./mnist"))
test=list(read(dataset='testing',path="./mnist"))
lbls=[]
pxls=[]
print(len(training))

# Select the first 10000 training data and the labels
for i in range(10000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)   

#  Set y=1 when labels == 5 and 0 otherwise
y=(labels==5).reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)

# Create the necessary feature and target variable
X1=X.T
Y1=y.T

# Create the layer dimensions. The number of features are 28 x 28 = 784 since the 28 x 28
# pixels is flattened to single vector of length 784.
layersDimensions=[784, 15,9,7,1] # Works very well
parameters = L_Layer_DeepModel(X1, Y1, layersDimensions, hiddenActivationFunc='relu', learning_rate = 0.1,num_iterations = 1000, fig="fig7.png")

# Test data
lbls1=[]
pxls1=[]
for i in range(800):
       l,p=test[i]
       lbls1.append(l)
       pxls1.append(p)
 
testLabels=np.array(lbls1)
testData=np.array(pxls1)

ytest=(testLabels==5).reshape(-1,1)
Xtest=testData.reshape(testData.shape[0],-1)
Xtest1=Xtest.T
Ytest1=ytest.T

yhat = predict(parameters,Xtest1)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Ytest1.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Ytest1.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Ytest1.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Ytest1.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Ytest1.T, yhat.T)))

probs=predict_proba(parameters,Xtest1)
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(Ytest1.T, probs.T)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.savefig("fig8.png",bbox_inches='tight')

## Accuracy: 0.99
## Precision: 0.96
## Recall: 0.89
## F1: 0.92

In addition to plotting the Cost vs Iterations, I also plot the Precision-Recall curve to show how Precision and Recall, which are complementary to each other, vary with respect to each other. To know more about Precision-Recall, please check my post Practical Machine Learning with R and Python – Part 4.

Check out my compact and minimal book  “Practical Machine Learning with R and Python:Second edition- Machine Learning in stereo”  available in Amazon in paperback($10.99) and kindle($7.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems and how to select the best model. Pick your copy today!!

A physical copy of the book is much better than scrolling down a webpage. Personally, I tend to use my own book quite frequently to refer to R, Python constructs,  subsetting, machine Learning function calls and the necessary parameters etc. It is useless to commit any of this to memory, and a physical copy of a book is much easier to thumb through for the relevant code snippet. Pick up your copy today!

4b. Binary Classification using MNIST – R code

In the R code below, the same binary classification of the digit ‘5’ versus ‘not 5’ is performed. The code to read and display the MNIST data is taken from Brendan O’Connor’s Github link at MNIST

source("mnist.R")
load_mnist()
#show_digit(train$x[2,])
layersDimensions=c(784, 7,7,3,1) # Works at 1500
x <- t(train$x)
# Choose only 5000 training data
x2 <- x[,1:5000]
y <-train$y
# Set labels for all digits that are 'not 5' to 0
y[y!=5] <- 0
# Set labels of digit 5 as 1
y[y==5] <- 1
# Set the data
y1 <- as.matrix(y)
y2 <- t(y1)
# Choose the 1st 5000 data
y3 <- y2[,1:5000]

#Execute the Deep Learning Model
retvals = L_Layer_DeepModel(x2, y3, layersDimensions, 
                               hiddenActivationFunc='tanh', 
                               learningRate = 0.3,
                               numIterations = 3000, print_cost = TRUE)
# Plot cost vs iteration
costs <- retvals[['costs']]
numIterations = 3000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

# Compute probability scores (use the same hidden activation function as in training)
scores <- computeScores(retvals$parameters, x2, hiddenActivationFunc='tanh')
a=y3==1
b=y3==0

# Compute probabilities of class 0 and class 1
class1=scores[a]
class0=scores[b]

# Plot the Precision-Recall curve (pr.curve is from the PRROC package)
library(PRROC)
pr <- pr.curve(scores.class0=class1,
               scores.class1=class0,
               curve=T)

plot(pr)

The precision-recall curve hugs the top of the plot, and hence the performance of the classifier is quite good.

4c. Binary Classification using MNIST – Octave code

The code to load the MNIST data was taken from Daniel E’s blog.
Precision-recall curves are available in MATLAB but are yet to be implemented in Octave’s statistics package.

load('./mnist/mnist.txt.gz'); % load the dataset
# Logical index for the 'not 5' digits
a=(trainY != 5);
# Logical index for the digit '5'
b=(trainY == 5);
# Make a copy of trainY and set 'not 5' to 0 and '5' to 1
y=trainY;
y(a)=0;
y(b)=1;
X=trainX(1:5000,:);
Y=y(1:5000);
# Set the dimensions of layer
layersDimensions=[784, 7,7,3,1];
# Compute the DL network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
hiddenActivationFunc='relu',
learningRate = 0.1,
numIterations = 5000);

Conclusion

It was quite a challenge coding a Deep Learning Network in Python, R and Octave. The Deep Learning network implementation in this post is the base Deep Learning network, without any of the regularization methods included. Here are some key learnings I got while playing with different multi-layer networks on different problems

a. Deep Learning Networks come with many levers, the hyper-parameters (a short sketch of how these map onto the code follows this list):
– learning rate
– activation unit
– number of hidden layers
– number of units per hidden layer
– number of iterations while performing gradient descent
b. Deep Networks are very sensitive. A change in any of the hyper-parameters can make the network perform very differently
c. Initially I thought adding more hidden layers, or more units per hidden layer, would make the DL network better at learning. On the contrary, performance degrades beyond the optimal DL configuration
d. At a sub-optimal number of hidden layers or number of hidden units, gradient descent seems to get stuck at a local minimum
e. There were occasions when the cost came down, only to increase slowly as the number of iterations was increased. Early stopping would probably have helped.
f. I also came across situations of ‘exploding/vanishing gradients’, where the cost went to Inf/-Inf. Here I think the inclusion of the ‘momentum method’ would have helped
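As an illustration of how these levers map onto the code in this post, here is a minimal sketch of a hyper-parameter sweep. It assumes the Python L_Layer_DeepModel signature and the X1, Y1 MNIST data from the Python example above; the specific values are only examples.

# Sweep over some of the hyper-parameter 'levers'
for lr in [0.01, 0.1, 0.5]:                             # learning rate
    for layersDimensions in [[784, 15, 9, 7, 1],        # number of hidden layers and
                             [784, 20, 10, 1]]:         # units per hidden layer
        parameters = L_Layer_DeepModel(X1, Y1, layersDimensions,
                            hiddenActivationFunc='relu',  # activation unit
                            learning_rate=lr,
                            num_iterations=1000,          # gradient descent iterations
                            fig="fig_" + str(lr) + "_" + str(len(layersDimensions)) + ".png")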

I intend to add the additional hyper-parameters of L1, L2 regularization, momentum method, early stopping etc. into the code in my future posts.
Feel free to fork/clone the code from Github Deep Learning – Part 3, and take the DL network apart and play around with it.

I will be continuing this series with more hyper-parameters to handle vanishing and exploding gradients, early stopping and regularization in the weeks to come. I also intend to add some more activation functions to this basic Multi-Layer Network.
Hang around, there are more exciting things to come.

Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. Deep Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville
4. Neural Networks: The mechanics of backpropagation
5. Machine Learning

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
3. Designing a Social Web Portal
4. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
5. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
6. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”
7. Design Principles of Scalable, Distributed Systems

To see all posts see Index of posts

Deep Learning from first principles in Python, R and Octave – Part 2

“What does the world outside your head really ‘look’ like? Not only is there no color, there’s also no sound: the compression and expansion of air is picked up by the ears, and turned into electrical signals. The brain then presents these signals to us as mellifluous tones and swishes and clatters and jangles. Reality is also odorless: there’s no such thing as smell outside our brains. Molecules floating through the air bind to receptors in our nose and are interpreted as different smells by our brain. The real world is not full of rich sensory events; instead, our brains light up the world with their own sensuality.”
“The Brain: The Story of You” by David Eagleman

The world is Maya, illusory. The ultimate reality, the Brahman, is all-pervading and all-permeating, which is colourless, odourless, tasteless, nameless and formless
Bhagavad Gita

1. Introduction

This post is a follow-up post to my earlier post Deep Learning from first principles in Python, R and Octave-Part 1. In the first part, I implemented Logistic Regression in vectorized Python, R and Octave, with a wannabe Neural Network (a Neural Network with no hidden layers). In this second part, I implement a regular, but somewhat primitive Neural Network (a Neural Network with just 1 hidden layer). The 2nd part implements classification of manually created datasets, where the different clusters of the 2 classes are not linearly separable.

Neural Networks perform really well in learning all sorts of non-linear boundaries between classes. Initially, logistic regression is used to perform the classification and the decision boundary is plotted. Vanilla logistic regression performs quite poorly. Using SVMs with a radial basis kernel would have performed much better in creating non-linear boundaries. To see R and Python implementations of SVMs take a look at my post Practical Machine Learning with R and Python – Part 4.

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in Kindle version ($9.99/Rs 449).

You may also like my companion book “Practical Machine Learning with R and Python: Second Edition – Machine Learning in stereo”, available on Amazon in paperback ($10.99) and Kindle ($7.99/Rs 449) versions. This book is ideal for a quick reference to the various ML functions and associated measures in both R and Python, which are essential to delve deep into Deep Learning.

Take a look at my video presentation, which discusses the derivation below step-by-step: Elements of Neural Networks and Deep Learning – Part 3

You can clone and fork this R Markdown file along with the vectorized implementations of the 3 layer Neural Network for Python, R and Octave from Github DeepLearning-Part2

2. The 3 layer Neural Network

A simple representation of a 3 layer Neural Network (NN) with 1 hidden layer is shown below.

In the above Neural Network, there are 2 input features at the input layer, 3 hidden units at the hidden layer and 1 unit at the output layer, since the network deals with binary classification. The activation unit at the hidden layer can be tanh, sigmoid, relu etc. At the output layer the activation is a sigmoid, to handle binary classification.

# Superscript indicates layer 1
z_{11} = w_{11}^{1}x_{1} + w_{21}^{1}x_{2} + b_{1}
z_{12} = w_{12}^{1}x_{1} + w_{22}^{1}x_{2} + b_{1}
z_{13} = w_{13}^{1}x_{1} + w_{23}^{1}x_{2} + b_{1}

Also a_{11} = tanh(z_{11})
a_{12} = tanh(z_{12})
a_{13} = tanh(z_{13})

# Superscript indicates layer 2
z_{21} = w_{11}^{2}a_{11} + w_{21}^{2}a_{12} + w_{31}^{2}a_{13} + b_{2}
a_{21} = sigmoid(z_{21})

Hence
Z1= \begin{pmatrix}  z_{11}\\  z_{12}\\  z_{13}  \end{pmatrix} =\begin{pmatrix}  w_{11}^{1} & w_{21}^{1} \\  w_{12}^{1} & w_{22}^{1} \\  w_{13}^{1} & w_{23}^{1}  \end{pmatrix} * \begin{pmatrix}  x_{1}\\  x_{2}  \end{pmatrix} + b_{1}
And
A1= \begin{pmatrix}  a_{11}\\  a_{12}\\  a_{13}  \end{pmatrix} = \begin{pmatrix}  tanh(z_{11})\\  tanh(z_{12})\\  tanh(z_{13})  \end{pmatrix}

Similarly
Z2= z_{21} = \begin{pmatrix}  w_{11}^{2} & w_{21}^{2} & w_{31}^{2}  \end{pmatrix} *\begin{pmatrix}  a_{11}\\  a_{12}\\  a_{13}  \end{pmatrix} + b_{2}
and A2 = a_{21} = sigmoid(z_{21})

These equations can be written as
Z1 = W1 * X + b1
A1 = tanh(Z1)
Z2 = W2 * A1 + b2
A2 = sigmoid(Z2)
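As a quick illustration, the forward propagation equations above translate almost directly into vectorized numpy code. The snippet below is a minimal sketch (the variable and function names are mine for illustration and are not from the DLfunctions implementation used later in this post); X has shape (2, m) for m samples, W1 is (3, 2), b1 is (3, 1), W2 is (1, 3) and b2 is (1, 1).

import numpy as np

def forward_prop(X, W1, b1, W2, b2):
    Z1 = np.dot(W1, X) + b1          # Z1 = W1 * X + b1, shape (3, m)
    A1 = np.tanh(Z1)                 # A1 = tanh(Z1)
    Z2 = np.dot(W2, A1) + b2         # Z2 = W2 * A1 + b2, shape (1, m)
    A2 = 1.0/(1.0 + np.exp(-Z2))     # A2 = sigmoid(Z2)
    return Z1, A1, Z2, A2

# Try it with small random weights and 5 samples
np.random.seed(1)
X = np.random.randn(2, 5)
W1, b1 = np.random.randn(3, 2)*0.01, np.zeros((3, 1))
W2, b2 = np.random.randn(1, 3)*0.01, np.zeros((1, 1))
Z1, A1, Z2, A2 = forward_prop(X, W1, b1, W2, b2)
print(A2.shape)   # (1, 5)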

I) Some important results (a memory refresher!)
d/dx(e^{x}) = e^{x} and d/dx(e^{-x}) = -e^{-x} -(a)
sinhx = (e^{x} - e^{-x})/2 and coshx = (e^{x} + e^{-x})/2
Using (a) we can show that d/dx(sinhx) = coshx and d/dx(coshx) = sinhx -(b)
Also, by the quotient rule, d/dx(f(x)/g(x)) = (g(x)*d/dx(f(x)) - f(x)*d/dx(g(x)))/g(x)^{2} -(c)

Since tanhx = sinhx/coshx, using (b) and (c) we get
d/dx(tanhx) = (coshx*d/dx(sinhx) - sinhx*d/dx(coshx))/cosh^{2}x
Substituting the derivatives of sinhx and coshx from (b) above we get
d/dx(tanhx) = (cosh^{2}x - sinh^{2}x)/cosh^{2}x = 1 - tanh^{2}x
Setting z = tanhx
d/dx(tanhx) = 1 - tanh^{2}x = 1 - z^{2} -(d)
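As a quick numerical sanity check on result (d), the central-difference derivative of tanh can be compared against 1 - tanh^{2}x (a small illustrative snippet, not part of the post's code):

import numpy as np
x = 0.7
h = 1e-6
numerical  = (np.tanh(x + h) - np.tanh(x - h))/(2*h)   # central difference approximation
analytical = 1 - np.tanh(x)**2                         # result (d)
print(numerical, analytical)                           # both are approximately 0.6347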

II) Derivatives
L=-(Ylog(A2) + (1-Y)log(1-A2))
dL/dA2 = -(Y/A2 - (1-Y)/(1-A2))
Since A2 = sigmoid(Z2) therefore dA2/dZ2 = A2(1-A2) see Part1
Z2 = W2A1 +b2
dZ2/dW2 = A1
dZ2/db2 = 1
A1 = tanh(Z1) and dA1/dZ1 = 1 - A1^{2}
Z1 = W1X + b1
dZ1/dW1 = X
dZ1/db1 = 1

III) Back propagation
Using the derivatives from II) we can derive the following results using Chain Rule
\partial L/\partial Z2 = \partial L/\partial A2 * \partial A2/\partial Z2
= -(Y/A2 - (1-Y)/(1-A2)) * A2(1-A2) = A2 - Y
\partial L/\partial W2 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial W2
= (A2-Y) *A1 -(A)
\partial L/\partial b2 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial b2 = (A2-Y) -(B)

\partial L/\partial Z1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 = (A2-Y) * W2 * (1-A1^{2})
\partial L/\partial W1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 *\partial Z1/\partial W1
=(A2-Y) * W2 * (1-A1^{2}) * X -(C)
\partial L/\partial b1 = \partial L/\partial A2 * \partial A2/\partial Z2 * \partial Z2/\partial A1 *\partial A1/\partial Z1 *\partial Z1/\partial b1
= (A2-Y) * W2 * (1-A1^{2}) -(D)

IV) Gradient Descent
The key computations in the backward cycle are
W1 = W1-learningRate * \partial L/\partial W1 – From (C)
b1 = b1-learningRate * \partial L/\partial b1 – From (D)
W2 = W2-learningRate * \partial L/\partial W2 – From (A)
b2 = b2-learningRate * \partial L/\partial b2 – From (B)

The weights and biases (W1,b1,W2,b2) are updated for each iteration thus minimizing the loss/cost.
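A minimal numpy sketch of one backward pass and gradient descent update, following equations (A)-(D) above, is shown below. The 1/m averaging over the m training samples and the transposes belong to the vectorized form and are not written out explicitly in the equations; the function name and signature here are mine for illustration, not the actual implementation in the Github code.

import numpy as np

def backward_prop_and_update(X, Y, A1, A2, W1, b1, W2, b2, learningRate):
    m = X.shape[1]                                    # number of training samples
    dZ2 = A2 - Y                                      # dL/dZ2
    dW2 = np.dot(dZ2, A1.T)/m                         # (A), averaged over m samples
    db2 = np.sum(dZ2, axis=1, keepdims=True)/m        # (B)
    dZ1 = np.dot(W2.T, dZ2) * (1 - A1**2)             # chain rule through tanh
    dW1 = np.dot(dZ1, X.T)/m                          # (C)
    db1 = np.sum(dZ1, axis=1, keepdims=True)/m        # (D)
    # Gradient descent updates
    W1 = W1 - learningRate*dW1
    b1 = b1 - learningRate*db1
    W2 = W2 - learningRate*dW2
    b2 = b2 - learningRate*db2
    return W1, b1, W2, b2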

These derivations can be represented pictorially using the computation graph (from the book Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville)

3. Manually create a data set that is not linearly separable

Initially I create a dataset with 2 classes which has around 9 clusters that cannot be separated by linear boundaries. Note: This data set is saved as data.csv and is used for the R and Octave Neural networks to see how they perform on the same dataset (a minimal sketch of saving it is included after the code below).

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets


colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
X, y = make_blobs(n_samples = 400, n_features = 2, centers = 7,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
y=y.reshape(400,1)
y = y % 2
#Plot the figure
plt.figure()
plt.title('Non-linearly separable classes')
plt.scatter(X[:,0], X[:,1], c=y,
           marker= 'o', s=50,cmap=cmap)
plt.savefig('fig1.png', bbox_inches='tight')
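Since the Note above mentions that this dataset is saved as data.csv for the R and Octave implementations, a possible way to write it out (not shown in the original code, and assuming the column layout x1, x2, label that the R code reads back) is:

# Continues the snippet above (X, y and np are already defined)
data = np.hstack((X, y))                      # columns: x1, x2, label
np.savetxt("data.csv", data, delimiter=",")   # read in R with read.csv(header=FALSE)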

4. Logistic Regression

On the above created dataset, classification with logistic regression is performed, and the decision boundary is plotted. It can be seen that logistic regression performs quite poorly

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!

colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
X, y = make_blobs(n_samples = 400, n_features = 2, centers = 7,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
y=y.reshape(400,1)
y = y % 2

# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV()
clf.fit(X, y.ravel())   # Flatten y to a 1-D array, as expected by sklearn

# Plot the decision boundary for logistic regression
plot_decision_boundary_n(lambda x: clf.predict(x), X.T, y.T,"fig2.png")

5. The 3 layer Neural Network in Python (vectorized)

The vectorized implementation is included below. Note that in the case of Python a learning rate of 0.5 and 4 hidden units performs very well.

## Random data set with 9 clusters
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!

X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T

#Perform gradient descent
parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=0.5, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(4),str(0.5),"fig3.png")
## Cost after iteration 0: 0.692669
## Cost after iteration 1000: 0.246650
## Cost after iteration 2000: 0.227801
## Cost after iteration 3000: 0.226809
## Cost after iteration 4000: 0.226518
## Cost after iteration 5000: 0.226331
## Cost after iteration 6000: 0.226194
## Cost after iteration 7000: 0.226085
## Cost after iteration 8000: 0.225994
## Cost after iteration 9000: 0.225915

 

6. The 3 layer Neural Network in R (vectorized)

For this, the dataset created with Python is saved and read in, to see how R performs on the same dataset. The vectorized implementation of the Neural Network was a little more interesting, as R does not have a package similar to ‘numpy’. While numpy handles broadcasting implicitly, in R I had to use the ‘sweep’ command to broadcast. The implementation is included below. Note that since the initialization with random weights is slightly different, R performs best with a learning rate of 0.1 and with 6 hidden units

source("DLfunctions2_1.R")
z <- as.matrix(read.csv("data.csv",header=FALSE)) # 
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
#Perform gradient descent
nn <-computeNN(x1, y1, 6, learningRate=0.1,numIterations=10000) # Good
## [1] 0.7075341
## [1] 0.2606695
## [1] 0.2198039
## [1] 0.2091238
## [1] 0.211146
## [1] 0.2108461
## [1] 0.2105351
## [1] 0.210211
## [1] 0.2099104
## [1] 0.2096437
## [1] 0.209409
plotDecisionBoundary(z,nn,6,0.1)

 

7. The 3 layer Neural Network in Octave (vectorized)

This uses the same dataset that was generated using Python code.
source("DL-function2.m")
data=csvread("data.csv");
X=data(:,1:2);
Y=data(:,3);
# Make sure that the model parameters are correct. Take the transpose of X & Y

#Perform gradient descent
[W1,b1,W2,b2,costs]= computeNN(X', Y',4, learningRate=0.5, numIterations = 10000);

8a. Performance  for different learning rates (Python)

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!
# Create data
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Create a list of learning rates
learningRate=[0.5,1.2,3.0]
df=pd.DataFrame()
#Compute costs for each learning rate
for lr in learningRate:
   parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=lr, numIterations = 10000)
   print(costs)
   df1=pd.DataFrame(costs)
   df=pd.concat([df,df1],axis=1)
#Set the iterations
iterations=[0,1000,2000,3000,4000,5000,6000,7000,8000,9000]   
#Create data frame
#Set index
df1=df.set_index([iterations])
df1.columns=[0.5,1.2,3.0]
fig=df1.plot()
fig=plt.title("Cost vs No of Iterations for different learning rates")
plt.savefig('fig4.png', bbox_inches='tight')

8b. Performance  for different hidden units (Python)

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd

from sklearn.datasets import make_classification, make_blobs
execfile("./DLfunctions.py") # Since import does not work in Rmd!!!
#Create data set
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Make a list of hidden unis
numHidden=[3,5,7]
df=pd.DataFrame()
#Compute costs for different hidden units
for numHid in numHidden:
   parameters,costs = computeNN(X2, Y2, numHidden = numHid, learningRate=1.2, numIterations = 10000)
   print(costs)
   df1=pd.DataFrame(costs)
   df=pd.concat([df,df1],axis=1)
#Set the iterations
iterations=[0,1000,2000,3000,4000,5000,6000,7000,8000,9000]   
#Set index
df1=df.set_index([iterations])
df1.columns=[3,5,7]
#Plot
fig=df1.plot()
fig=plt.title("Cost vs No of Iterations for different no of hidden units")
plt.savefig('fig5.png', bbox_inches='tight')

9a. Performance  for different learning rates (R)

source("DLfunctions2_1.R")
# Read data
z <- as.matrix(read.csv("data.csv",header=FALSE)) # 
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
#Loop through learning rates and compute costs
learningRate <-c(0.1,1.2,3.0)
df <- NULL
for(i in seq_along(learningRate)){
   nn <-  computeNN(x1, y1, 6, learningRate=learningRate[i],numIterations=10000) 
   cost <- nn$costs
   df <- cbind(df,cost)
  
}      

#Create dataframe
df <- data.frame(df) 
iterations=seq(0,10000,by=1000)
df <- cbind(iterations,df)
names(df) <- c("iterations","0.5","1.2","3.0")
library(reshape2)
df1 <- melt(df,id="iterations")  # Melt the data
#Plot  
ggplot(df1) + geom_line(aes(x=iterations,y=value,colour=variable),size=1)  + 
    xlab("Iterations") +
    ylab('Cost') + ggtitle("Cost vs No iterations for  different learning rates")

9b. Performance  for different hidden units (R)

source("DLfunctions2_1.R")
# Loop through Num hidden units
numHidden <-c(4,6,9)
df <- NULL
for(i in seq_along(numHidden)){
    nn <-  computeNN(x1, y1, numHidden[i], learningRate=0.1,numIterations=10000) 
    cost <- nn$costs
    df <- cbind(df,cost)
    
}      
df <- data.frame(df) 
iterations=seq(0,10000,by=1000)
df <- cbind(iterations,df)
names(df) <- c("iterations","4","6","9")
library(reshape2)
# Melt
df1 <- melt(df,id="iterations") 
# Plot   
ggplot(df1) + geom_line(aes(x=iterations,y=value,colour=variable),size=1)  + 
    xlab("Iterations") +
    ylab('Cost') + ggtitle("Cost vs No iterations for  different number of hidden units")

10a. Performance of the Neural Network for different learning rates (Octave)

source("DL-function2.m")
plotLRCostVsIterations()
print -djpg figa.jpg

10b. Performance of the Neural Network for different number of hidden units (Octave)

source("DL-function2.m")
plotHiddenCostVsIterations()
print -djpg figa.jpg

11. Turning the heat on the Neural Network

In this 2nd part I create a central region of positives and the outside region as negatives. The points are generated using the equation of a circle (x – a)^{2} + (y – b)^{2} = R^{2}. How does the 3 layer Neural Network perform on this? Here’s a look! Note: The same dataset is also used for the R and Octave Neural Network constructions

12. Manually creating a circular central region

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

colors=['black','gold']
cmap = matplotlib.colors.ListedColormap(colors)
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape
# Create (x-a)^2 + (y-b)^2 = R^2
# Label points inside the circle (x-5)^2 + (y-5)^2 <= 6 as 1. Use ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

cmap = matplotlib.colors.ListedColormap(colors)

plt.figure()
plt.title('Non-linearly separable classes')
plt.scatter(X[:,0], X[:,1], c=Y,
           marker= 'o', s=15,cmap=cmap)
plt.savefig('fig6.png', bbox_inches='tight')

13a. Decision boundary with hidden units=4 and learning rate = 2.2 (Python)

With the above hyper parameters the decision boundary is triangular

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model
execfile("./DLfunctions.py")
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape

# Label points inside the circle (x-5)^2 + (y-5)^2 <= 6 as 1. Use ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

X2=X.T
Y2=Y.T

parameters,costs = computeNN(X2, Y2, numHidden = 4, learningRate=2.2, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(4),str(2.2),"fig7.png")
## Cost after iteration 0: 0.692836
## Cost after iteration 1000: 0.331052
## Cost after iteration 2000: 0.326428
## Cost after iteration 3000: 0.474887
## Cost after iteration 4000: 0.247989
## Cost after iteration 5000: 0.218009
## Cost after iteration 6000: 0.201034
## Cost after iteration 7000: 0.197030
## Cost after iteration 8000: 0.193507
## Cost after iteration 9000: 0.191949

13b. Decision boundary with hidden units=12 and learning rate = 2.2 (Python)

The decision boundary for the above hyper parameters is shown below

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model
execfile("./DLfunctions.py")
x1=np.random.uniform(0,10,800).reshape(800,1)
x2=np.random.uniform(0,10,800).reshape(800,1)
X=np.append(x1,x2,axis=1)
X.shape

# Label points inside the circle (x-5)^2 + (y-5)^2 <= 6 as 1. Use ravel() to flatten this vector
a=(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2) <= 6).ravel()
Y=a.reshape(800,1)

X2=X.T
Y2=Y.T

parameters,costs = computeNN(X2, Y2, numHidden = 12, learningRate=2.2, numIterations = 10000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(12),str(2.2),"fig8.png")
## Cost after iteration 0: 0.693291
## Cost after iteration 1000: 0.383318
## Cost after iteration 2000: 0.298807
## Cost after iteration 3000: 0.251735
## Cost after iteration 4000: 0.177843
## Cost after iteration 5000: 0.130414
## Cost after iteration 6000: 0.152400
## Cost after iteration 7000: 0.065359
## Cost after iteration 8000: 0.050921
## Cost after iteration 9000: 0.039719

14a. Decision boundary with hidden units=9 and learning rate = 0.5 (R)

With 9 hidden units and a learning rate of 0.5, the decision boundary is also triangular in shape in R

source("DLfunctions2_1.R")
z <- as.matrix(read.csv("data1.csv",header=FALSE)) # N
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
nn <-computeNN(x1, y1, 9, learningRate=0.5,numIterations=10000) # Triangular
## [1] 0.8398838
## [1] 0.3303621
## [1] 0.3127731
## [1] 0.3012791
## [1] 0.3305543
## [1] 0.3303964
## [1] 0.2334615
## [1] 0.1920771
## [1] 0.2341225
## [1] 0.2188118
## [1] 0.2082687
plotDecisionBoundary(z,nn,9,0.5)

14b. Decision boundary with hidden units=8 and learning rate = 0.1 (R)

source("DLfunctions2_1.R")
z <- as.matrix(read.csv("data1.csv",header=FALSE)) # N
x <- z[,1:2]
y <- z[,3]
x1 <- t(x)
y1 <- t(y)
nn <-computeNN(x1, y1, 8, learningRate=0.1,numIterations=10000) # Hemisphere
## [1] 0.7273279
## [1] 0.3169335
## [1] 0.2378464
## [1] 0.1688635
## [1] 0.1368466
## [1] 0.120664
## [1] 0.111211
## [1] 0.1043362
## [1] 0.09800573
## [1] 0.09126161
## [1] 0.0840379
plotDecisionBoundary(z,nn,8,0.1)

15a. Decision boundary with hidden units=12 and learning rate = 1.5 (Octave)

source("DL-function2.m")
data=csvread("data1.csv");
X=data(:,1:2);
Y=data(:,3);
# Make sure that the model parameters are correct. Take the transpose of X & Y
[W1,b1,W2,b2,costs]= computeNN(X', Y',12, learningRate=1.5, numIterations = 10000);
plotDecisionBoundary(data, W1,b1,W2,b2)
print -djpg fige.jpg

Conclusion: This post implemented a 3 layer Neural Network to create non-linear boundaries while performing classification. Clearly, the Neural Network performs very well when the number of hidden units and the learning rate are chosen appropriately.

To be continued…
Watch this space!!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. Deep Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville
4. Neural Networks: The mechanics of backpropagation
5. Machine Learning

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
3. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
4. Exploring Quantum Gate operations with QCSimulator
5. Simulating a Web Joint in Android
6. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
7. Presentation on Wireless Technologies – Part 1

To see all posts check Index of posts