My presentations on ‘Elements of Neural Networks & Deep Learning’ – Parts 4, 5

This is the next set of presentations on “Elements of Neural Networks and Deep Learning”. In the 4th presentation I discuss and derive the generalized equations for a multi-unit, multi-layer Deep Learning network. The 5th presentation derives the equations for a Deep Learning network performing multi-class classification, along with the derivation of the cross-entropy loss. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8. These have more content and also include some corrections.

1. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation, in which I derived the equations for a simple 3-layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for an L-Layer, multi-unit Deep Learning Network, with any activation function g(z).
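For quick reference, the standard forward and backward propagation equations for any layer l of such a network (with m training samples, \odot denoting the element-wise product and g'(z) the derivative of the activation function) take the form below; the presentation derives these step by step, and its notation may differ slightly.

Z^{l} = W^{l}A^{l-1} + b^{l}
A^{l} = g(Z^{l})
dZ^{l} = dA^{l} \odot g'(Z^{l})
dW^{l} = \frac{1}{m} dZ^{l} (A^{l-1})^{T}
db^{l} = \frac{1}{m} \sum dZ^{l}
dA^{l-1} = (W^{l})^{T} dZ^{l}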


The implementations of L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

2. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation of the Jacobian of the Softmax is discussed, and subsequently the derivative of the cross-entropy loss is also covered in detail. Finally, the complete set of equations for a Neural Network performing multi-class classification is derived.
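For quick reference, the two key results derived in this presentation (in the usual notation, where p_{i} is the Softmax output, z_{j} the pre-activation, y the one-hot encoded label and \delta_{ij} the Kronecker delta) are the Jacobian of the Softmax and the derivative of the cross-entropy loss with respect to the pre-activations:

\frac{\partial p_{i}}{\partial z_{j}} = p_{i}(\delta_{ij} - p_{j})
\frac{\partial L}{\partial z_{j}} = p_{j} - y_{j}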


The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

To be continued. Watch this space!

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

Also see
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Big Data-2: Move into the big league:Graduate from R to SparkR
3. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
4. My TEDx talk on the “Internet of Things”
5. Rock N’ Roll with Bluemix, Cloudant & NodeExpress
6. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
7. Literacy in India – A deepR dive
8. Fun simulation of a Chain in Android

To see all posts click Index of Posts

Deep Learning from first principles in Python, R and Octave – Part 8

1. Introduction

You don’t understand anything until you learn it more than one way. Marvin Minsky
No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either. Marvin Minsky
A wealth of information creates a poverty of attention. Herbert Simon

This post, ‘Deep Learning from first Principles in Python, R and Octave – Part 8’, is the final post in my Deep Learning from first principles series. In this post, I discuss and implement a key functionality needed while building Deep Learning networks viz. ‘Gradient Checking’. Gradient Checking is an important method to verify the correctness of your implementation, specifically of the forward propagation and backward propagation cycles. In addition, I also discuss some tips for tuning the hyper-parameters of a Deep Learning network, based on my experience.

My posts in this ‘Deep Learning Series’ so far have been
1. Deep Learning from first principles in Python, R and Octave – Part 1 In part 1, I implement logistic regression as a neural network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2 In the second part I implement a simple Neural network with just 1 hidden layer and a sigmoid activation output function
3. Deep Learning from first principles in Python, R and Octave – Part 3 The 3rd part implemented a multi-layer Deep Learning Network with sigmoid activation output in vectorized Python, R and Octave
4. Deep Learning from first principles in Python, R and Octave – Part 4 The 4th part deals with multi-class classification. Specifically, I derive the Jacobian of the Softmax function and enhance my L-Layer DL network to include Softmax output function in addition to Sigmoid activation
5. Deep Learning from first principles in Python, R and Octave – Part 5 This post uses the Softmax classifier implemented to classify MNIST digits using a L-layer Deep Learning network
6. Deep Learning from first principles in Python, R and Octave – Part 6 The 6th part adds more bells and whistles to my L-Layer DL network, by including different initialization types, namely He and Xavier. In addition, L2 regularization and random dropout are added.
7. Deep Learning from first principles in Python, R and Octave – Part 7 The 7th part deals with Stochastic Gradient Descent Optimization methods including momentum, RMSProp and Adam
8. Deep Learning from first principles in Python, R and Octave – Part 8 – This post implements a critical function for ensuring the correctness of a L-Layer Deep Learning network implementation using Gradient Checking

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python- Machine Learning in stereo” available in Amazon in paperback($9.99) and Kindle($6.99) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.

Gradient Checking is based on the following approach. One iteration of Gradient Descent computes and updates the parameters \theta by doing
\theta := \theta - \alpha \frac{d}{d\theta}J(\theta), where \alpha is the learning rate.
To minimize the cost we will need to minimize J(\theta)
Let g(\theta) be a function that computes the derivative \frac {d}{d\theta}J(\theta). Gradient Checking allows us to numerically evaluate the implementation of the function g(\theta) and verify its correctness.
We know the derivative of a function is given by
\frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
Note: The above is the 2-sided derivative. The 1-sided derivative is given by \frac {d}{d\theta}J(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta)} {\epsilon}
Gradient Checking is based on the 2-sided derivative because its error is of the order O(\epsilon^{2}), as opposed to O(\epsilon) for the 1-sided derivative (in the symmetric difference the first-order terms of the Taylor expansion cancel, leaving an error of order \epsilon^{2}).
Hence Gradient Check uses the 2-sided derivative as follows
g(\theta) = \lim_{\epsilon \to 0} \frac {J(\theta +\epsilon) - J(\theta -\epsilon)} {2\epsilon}
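As a minimal illustration of this idea on a scalar, before applying it to a full network, the two-sided approximation can be computed as below. The cost function J here is a toy example used purely for illustration.

def J(theta):
    # Toy cost function, used only to illustrate the idea
    return theta ** 3

def two_sided_derivative(cost_fn, theta, epsilon=1e-7):
    # Numerical approximation of d(cost_fn)/d(theta) using the 2-sided difference
    return (cost_fn(theta + epsilon) - cost_fn(theta - epsilon)) / (2 * epsilon)

theta = 2.0
gradapprox = two_sided_derivative(J, theta)
analytical = 3 * theta ** 2    # exact derivative of theta^3 at theta
print(gradapprox, analytical)  # the two values agree very closely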

In Gradient Check the following is done
A) Run one normal cycle of your implementation by doing the following
a) Compute the output activation by running 1 cycle of forward propagation
b) Compute the cost using the output activation
c) Compute the gradients using backpropation (grad)

B) Perform gradient check steps as below
a) Set up \theta: flatten all the ‘weight’ and ‘bias’ matrices and vectors into a single column vector.
b) Initialize \theta+ by bumping up \theta by adding \epsilon (\theta + \epsilon)
c) Perform forward propagation with \theta+
d) Compute cost with \theta+ i.e. J(\theta+)
e) Initialize  \theta- by bumping down \theta by subtracting \epsilon (\theta - \epsilon)
f) Perform forward propagation with \theta-
g) Compute cost with \theta- i.e.  J(\theta-)
h) Compute \frac {d} {d\theta} J(\theta) or ‘gradapprox’ as \frac {J(\theta+) - J(\theta-)} {2\epsilon} using the 2-sided derivative.
i) Compute the normalized Euclidean (L2 norm) distance between ‘grad’ and ‘gradapprox’ (the exact formula is given right after this list). If the
difference is of the order of 10^{-5} or 10^{-7} the implementation is correct. In the Deep Learning Specialization, Prof Andrew Ng mentions that if the difference is of the order of 10^{-7} then the implementation is correct; a difference of 10^{-5} is also acceptable. Anything more than that is a cause for worry, and you should look at your code more closely. To see more details click Gradient checking and advanced optimization
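The difference computed in step i) is the normalized Euclidean distance between the two gradient vectors, which is exactly what the implementations below compute:

difference = \frac{\| grad - gradapprox \|_{2}}{\| grad \|_{2} + \| gradapprox \|_{2}}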

You can clone/download the code from Github at DeepLearning-Part8

After spending the better part of 3 days, I now realize how critical Gradient Check is for ensuring the correctness of your implementation. Initially I was getting a very high difference and did not know how to interpret the results or debug my implementation. After many hours of staring at the results, I was finally able to arrive at a way to localize issues in the implementation. In fact, I did catch a small bug in my Python code, which did not exist in the R and Octave implementations. I will demonstrate this below.

1.1a Gradient Check – Sigmoid Activation – Python

import numpy as np
import matplotlib

exec(open("DLfunctions8.py").read())
exec(open("testcases.py").read())
#Load the data
train_X, train_Y, test_X, test_Y = load_dataset()
#Set layer dimensions
layersDimensions = [2,4,1]  
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, hiddenActivationFunc="relu",outputActivationFunc="sigmoid")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="sigmoid") 
print("cost=",cost)
#Perform backprop and get gradients
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",outputActivationFunc="sigmoid")

epsilon = 1e-7
outputActivationFunc="sigmoid"

# Set-up variables
# Flatten parameters to a vector
parameters_values, _ = dictionary_to_vector(parameters)
#Flatten gradients to a vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = parameters_values.shape[0]
#Initialize
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))

# Compute gradapprox using 2 sided derivative
for i in range(num_parameters):
    # Compute J_plus[i]. 
    thetaplus = np.copy(parameters_values)                                   
    thetaplus[i][0] = thetaplus[i][0] + epsilon                                 
    AL, caches, dropoutMat = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaplus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    J_plus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc) 
    
    
    # Compute J_minus[i]. 
    thetaminus = np.copy(parameters_values)                                     
    thetaminus[i][0] = thetaminus[i][0] - epsilon                                     
    AL, caches, dropoutMat  = forwardPropagationDeep(train_X, vector_to_dictionary(parameters,thetaminus), keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    J_minus[i] = computeCost(AL, train_Y, outputActivationFunc=outputActivationFunc)                            
       
    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)

# Compare gradapprox to backward propagation gradients by computing difference. 
numerator = np.linalg.norm(grad-gradapprox)                                           
denominator = np.linalg.norm(grad) +  np.linalg.norm(gradapprox)                                         
difference =  numerator/denominator                                        

#Check the difference
if difference > 1e-5:
    print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
    print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
print(difference)
print("\n")    
# The technique below can be used to identify 
# which of the parameters are in error
# Convert grad to dictionary
m=vector_to_dictionary2(parameters,grad)
print("Gradients from backprop")
print(m)
print("\n")
# Convert gradapprox to dictionary
n=vector_to_dictionary2(parameters,gradapprox)
print("Gradapprox from gradient check")
print(n)
## (300, 2)
## (300,)
## cost= 0.6931455556341791
## [92mYour backward propagation works perfectly fine! difference = 1.1604150683743381e-06[0m
## 1.1604150683743381e-06
## 
## 
## Gradients from backprop
## {'dW1': array([[-6.19439955e-06, -2.06438046e-06],
##        [-1.50165447e-05,  7.50401672e-05],
##        [ 1.33435433e-04,  1.74112143e-04],
##        [-3.40909024e-05, -1.38363681e-04]]), 'db1': array([[ 7.31333221e-07],
##        [ 7.98425950e-06],
##        [ 8.15002817e-08],
##        [-5.69821155e-08]]), 'dW2': array([[2.73416304e-04, 2.96061451e-04, 7.51837363e-05, 1.01257729e-04]]), 'db2': array([[-7.22232235e-06]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[-6.19448937e-06, -2.06501483e-06],
##        [-1.50168766e-05,  7.50399742e-05],
##        [ 1.33435485e-04,  1.74112391e-04],
##        [-3.40910633e-05, -1.38363765e-04]]), 'db1': array([[ 7.31081862e-07],
##        [ 7.98472399e-06],
##        [ 8.16013923e-08],
##        [-5.71764858e-08]]), 'dW2': array([[2.73416290e-04, 2.96061509e-04, 7.51831930e-05, 1.01257891e-04]]), 'db2': array([[-7.22255589e-06]])}

1.1b Gradient Check – Softmax Activation – Python (Error!!)

In the code below I show how I managed to spot a bug in my implementation.

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1

parameters = initializeDeepModel(layersDimensions)
#Compute forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
# Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")

cost= 1.0986187818144022
2
There is a mistake in the backward propagation! difference = 0.7100295155692544
0.7100295155692544


Gradients from backprop
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40953794e-05, -9.52657769e-04, -1.10269379e-04],
       [-7.45469382e-04,  9.49795606e-04,  2.29045434e-04],
       [ 8.29564761e-04,  2.86216305e-06, -1.18776055e-04]]), 
     'db2': array([[-0.00253808],
       [-0.00505508],
       [ 0.00759315]])}


Gradapprox from gradient check
{'dW1': array([[ 0.00050125,  0.00045194],
       [ 0.00096392,  0.00039641],
       [-0.00014276, -0.00045639]]), 'db1': array([[ 0.00070082],
       [-0.00224399],
       [ 0.00052305]]), 'dW2': array([[-8.40960634e-05, -9.52657953e-04, -1.10268461e-04],
       [-7.45469242e-04,  9.49796908e-04,  2.29045671e-04],
       [ 8.29565305e-04,  2.86104473e-06, -1.18776100e-04]]), 
     'db2': array([[-8.46211989e-06],
       [-1.68487446e-05],
       [ 2.53108645e-05]])}

Gradient Check gives a high difference value of 0.7100295. Inspecting the gradients and gradapprox, we can see there is a very big discrepancy in db2. After going over my code, I discovered that my computation in the function layerActivationBackward for Softmax was

 
   # Erroneous code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)
instead of
   # Fixed code
   if activationFunc == 'softmax':
        dW = 1/numtraining * np.dot(A_prev,dZ)
        db = 1/numtraining *  np.sum(dZ, axis=0, keepdims=True)
        dA_prev = np.dot(dZ,W)

After fixing this error, when I ran Gradient Check again I got the following.

1.1c Gradient Check – Softmax Activation – Python (Corrected!!)

import numpy as np
exec(open("DLfunctions8.py").read())
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j


# Plot the data
#plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
layersDimensions = [2,3,3]
y1=y.reshape(-1,1).T
train_X=X.T
train_Y=y1
#Set layer dimensions
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
AL, caches, dropoutMat = forwardPropagationDeep(train_X, parameters, keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc="softmax")
#Compute cost
cost = computeCost(AL, train_Y, outputActivationFunc="softmax") 
print("cost=",cost)
#Compute gradients from backprop
gradients = backwardPropagationDeep(AL, train_Y, caches, dropoutMat, lambd=0, keep_prob=1, 
                                    hiddenActivationFunc="relu",outputActivationFunc="softmax")
# Note the transpose of the gradients for Softmax has to be taken
L= len(parameters)//2
print(L)
gradients['dW'+str(L)]=gradients['dW'+str(L)].T
gradients['db'+str(L)]=gradients['db'+str(L)].T
#Perform gradient check
gradient_check_n(parameters, gradients, train_X, train_Y, epsilon = 1e-7,outputActivationFunc="softmax")
## cost= 1.0986193170234435
## 2
## [92mYour backward propagation works perfectly fine! difference = 5.268804859613151e-07[0m
## 5.268804859613151e-07
## 
## 
## Gradients from backprop
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83441270e-05, -9.70179498e-04, -1.08715815e-04],
##        [-7.70175008e-04,  9.54478237e-04,  2.27690198e-04],
##        [ 8.48519135e-04,  1.57012608e-05, -1.18974383e-04]]), 'db2': array([[-8.52190476e-06],
##        [-1.69954294e-05],
##        [ 2.55173342e-05]])}
## 
## 
## Gradapprox from gradient check
## {'dW1': array([[ 0.00053206,  0.00038987],
##        [ 0.00093941,  0.00038077],
##        [-0.00012177, -0.0004692 ]]), 'db1': array([[ 0.00072662],
##        [-0.00210198],
##        [ 0.00046741]]), 'dW2': array([[-7.83439980e-05, -9.70180603e-04, -1.08716369e-04],
##        [-7.70173925e-04,  9.54478718e-04,  2.27690089e-04],
##        [ 8.48520143e-04,  1.57018842e-05, -1.18973720e-04]]), 'db2': array([[-8.52096171e-06],
##        [-1.69964043e-05],
##        [ 2.55162558e-05]])}

1.2a Gradient Check – Sigmoid Activation – R

source("DLfunctions8.R")
z <- as.matrix(read.csv("circles.csv",header=FALSE)) 

x <- z[,1:2]
y <- z[,3]
X <- t(x)
Y <- t(y)
#Set layer dimensions
layersDimensions = c(2,5,1)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="sigmoid")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="sigmoid",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 0.6931447
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="sigmoid",numClasses=layersDimensions[length(layersDimensions)])
epsilon = 1e-07
outputActivationFunc="sigmoid"
#Convert parameter list to vector
parameters_values = list_to_vector(parameters)
#Convert gradient list to vector
grad = gradients_to_vector(parameters,gradients)
num_parameters = dim(parameters_values)[1]
#Initialize
J_plus = matrix(rep(0,num_parameters),
                nrow=num_parameters,ncol=1)
J_minus = matrix(rep(0,num_parameters),
                 nrow=num_parameters,ncol=1)
gradapprox = matrix(rep(0,num_parameters),
                    nrow=num_parameters,ncol=1)

# Compute gradapprox
for(i in 1:num_parameters){
    # Compute J_plus[i]. 
    thetaplus = parameters_values                                   
    thetaplus[i][1] = thetaplus[i][1] + epsilon                                 
    retvals = forwardPropagationDeep(X, vector_to_list(parameters,thetaplus), keep_prob=1, 
                                                hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)
    
    AL <- retvals[['AL']]
    J_plus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc) 


   # Compute J_minus[i]. 
    thetaminus = parameters_values                                     
    thetaminus[i][1] = thetaminus[i][1] - epsilon                                     
    retvals  = forwardPropagationDeep(X, vector_to_list(parameters,thetaminus), keep_prob=1, 
                                                 hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc)     
    AL <- retvals[['AL']]
    J_minus[i] = computeCost(AL, Y, outputActivationFunc=outputActivationFunc)                            

    # Compute gradapprox[i]   
    gradapprox[i] = (J_plus[i] - J_minus[i])/(2*epsilon)
}
# Compare gradapprox to backward propagation gradients by computing difference.
#Compute L2Norm
numerator = L2NormVec(grad-gradapprox)                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox)                                         
difference =  numerator/denominator 
if(difference > 1e-5){
    cat("There is a mistake, the difference is too high",difference)
} else{
    cat("The implementations works perfectly", difference)
}
## The implementations works perfectly 1.279911e-06
# This can be used to check
print("Gradients from backprop")
## [1] "Gradients from backprop"
vector_to_list2(parameters,grad)
## $dW1
##               [,1]          [,2]
## [1,] -7.641588e-05 -3.427989e-07
## [2,] -9.049683e-06  6.906304e-05
## [3,]  3.401039e-06 -1.503914e-04
## [4,]  1.535226e-04 -1.686402e-04
## [5,] -6.029292e-05 -2.715648e-04
## 
## $db1
##               [,1]
## [1,]  6.930318e-06
## [2,] -3.283117e-05
## [3,]  1.310647e-05
## [4,] -3.454308e-05
## [5,] -2.331729e-08
## 
## $dW2
##              [,1]         [,2]         [,3]        [,4]         [,5]
## [1,] 0.0001612356 0.0001113475 0.0002435824 0.000362149 2.874116e-05
## 
## $db2
##              [,1]
## [1,] -1.16364e-05
print("Grad approx from gradient check")
## [1] "Grad approx from gradient check"
vector_to_list2(parameters,gradapprox)
## $dW1
##               [,1]          [,2]
## [1,] -7.641554e-05 -3.430589e-07
## [2,] -9.049428e-06  6.906253e-05
## [3,]  3.401168e-06 -1.503919e-04
## [4,]  1.535228e-04 -1.686401e-04
## [5,] -6.029288e-05 -2.715650e-04
## 
## $db1
##               [,1]
## [1,]  6.930012e-06
## [2,] -3.283096e-05
## [3,]  1.310618e-05
## [4,] -3.454237e-05
## [5,] -2.275957e-08
## 
## $dW2
##              [,1]         [,2]         [,3]         [,4]        [,5]
## [1,] 0.0001612355 0.0001113476 0.0002435829 0.0003621486 2.87409e-05
## 
## $db2
##              [,1]
## [1,] -1.16368e-05

1.2b Gradient Check – Softmax Activation – R

source("DLfunctions8.R")
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X <- t(X)
Y <- t(y)
layersDimensions = c(2, 3, 3)
parameters = initializeDeepModel(layersDimensions)
#Perform forward prop
retvals = forwardPropagationDeep(X, parameters,keep_prob=1, hiddenActivationFunc="relu",
                                 outputActivationFunc="softmax")
AL <- retvals[['AL']]
caches <- retvals[['caches']]
dropoutMat <- retvals[['dropoutMat']]
#Compute cost
cost <- computeCost(AL, Y,outputActivationFunc="softmax",
                    numClasses=layersDimensions[length(layersDimensions)])
print(cost)
## [1] 1.098618
# Backward propagation.
gradients = backwardPropagationDeep(AL, Y, caches, dropoutMat, lambd=0, keep_prob=1, hiddenActivationFunc="relu",
                                    outputActivationFunc="softmax",numClasses=layersDimensions[length(layersDimensions)])
# Need to take transpose of the last layer for Softmax
L=length(parameters)/2
gradients[[paste('dW',L,sep="")]]=t(gradients[[paste('dW',L,sep="")]])
gradients[[paste('db',L,sep="")]]=t(gradients[[paste('db',L,sep="")]])
#Perform gradient check
gradient_check_n(parameters, gradients, X, Y, 
                 epsilon = 1e-7,outputActivationFunc="softmax")
## The implementations works perfectly 3.903011e-07[1] "Gradients from backprop"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962367 -0.0001907606
## [2,] 0.0004444254  0.0010354412
## [3,] 0.0003078611  0.0007591255
## 
## $db1
##               [,1]
## [1,] -0.0017305136
## [2,]  0.0005393734
## [3,]  0.0012484550
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515627e-04  7.487283e-04 -3.971656e-04
## [2,] -6.381521e-05 -1.257328e-06  6.507254e-05
## [3,] -1.719479e-04 -4.857264e-04  6.576743e-04
## 
## $db2
##               [,1]
## [1,] -5.536383e-06
## [2,] -1.824656e-05
## [3,]  2.378295e-05
## 
## [1] "Grad approx from gradient check"
## $dW1
##              [,1]          [,2]
## [1,] 0.0007962364 -0.0001907607
## [2,] 0.0004444256  0.0010354406
## [3,] 0.0003078615  0.0007591250
## 
## $db1
##               [,1]
## [1,] -0.0017305135
## [2,]  0.0005393741
## [3,]  0.0012484547
## 
## $dW2
##               [,1]          [,2]          [,3]
## [1,] -3.515632e-04  7.487277e-04 -3.971656e-04
## [2,] -6.381451e-05 -1.257883e-06  6.507239e-05
## [3,] -1.719469e-04 -4.857270e-04  6.576739e-04
## 
## $db2
##               [,1]
## [1,] -5.536682e-06
## [2,] -1.824652e-05
## [3,]  2.378209e-05

1.3a Gradient Check – Sigmoid Activation – Octave

source("DL8functions.m")
################## Circles
data=csvread("circles.csv");

X=data(:,1:2);
Y=data(:,3);
#Set layer dimensions
layersDimensions = [2 5  1]; #tanh=-0.5(ok), #relu=0.1 best!
[weights biases] = initializeDeepModel(layersDimensions);
#Perform forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid");
#Compute cost
cost = computeCost(AL, Y',outputActivationFunc="sigmoid",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
#Compute gradients from cost
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="sigmoid",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));
epsilon = 1e-07;
outputActivationFunc="sigmoid";
# Convert parameter cell array to vector
parameters_values = cellArray_to_vector(weights, biases);
#Convert gradient cell array to vector
grad = gradients_to_vector(gradsDW,gradsDB);
num_parameters = size(parameters_values)(1);
#Initialize
J_plus = zeros(num_parameters, 1);
J_minus = zeros(num_parameters, 1);
gradapprox = zeros(num_parameters, 1);
# Compute gradapprox
for i = 1:num_parameters
    # Compute J_plus[i]. 
    thetaplus = parameters_values;                                   
    thetaplus(i,1) = thetaplus(i,1) + epsilon;      
    [weights1 biases1] =vector_to_cellArray(weights, biases,thetaplus);    
    [AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);
    J_plus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc); 
        
    # Compute J_minus[i]. 
    thetaminus = parameters_values;                                  
    thetaminus(i,1) = thetaminus(i,1) - epsilon ;    
    [weights1 biases1] = vector_to_cellArray(weights, biases,thetaminus);    
    [AL forward_caches activation_caches droputMat]  = forwardPropagationDeep(X',weights1, biases1, keep_prob=1, 
                                              hiddenActivationFunc="relu",outputActivationFunc=outputActivationFunc);     
    J_minus(i) = computeCost(AL, Y', outputActivationFunc=outputActivationFunc);                            
      
    # Compute gradapprox[i]   
    gradapprox(i) = (J_plus(i) - J_minus(i))/(2*epsilon);

endfor

#Compute L2Norm
numerator = L2NormVec(grad-gradapprox);                                           
denominator = L2NormVec(grad) +  L2NormVec(gradapprox);                                         
difference =  numerator/denominator;
disp(difference);
#Check difference
if difference > 1e-04
   printf("There is a mistake in the implementation ");
   disp(difference);
else
   printf("The implementation works perfectly");
      disp(difference);
endif
[weights1 biases1] = vector_to_cellArray(weights, biases,grad);  
printf("Gradients from back propagation"); 
disp(weights1);
disp(biases1); 
[weights2 biases2] = vector_to_cellArray(weights, biases,gradapprox); 
printf("Gradients from gradient check");
disp(weights2);
disp(biases2); 
0.69315
1.4893e-005
The implementation works perfectly 1.4893e-005
Gradients from back propagation
{
[1,1] =
5.0349e-005 2.1323e-005
8.8632e-007 1.8231e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9529e-007
5.4502e-005 3.2721e-005
[1,2] =
1.0567e-005 6.0615e-005 4.6004e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8716e-005
1.1309e-009
4.7686e-005
1.2051e-005
-1.4612e-005
[1,2] = 9.5808e-006
}
Gradients from gradient check
{
[1,1] =
5.0348e-005 2.1320e-005
8.8485e-007 1.8219e-006
9.3784e-005 1.0057e-004
1.0875e-004 -1.9762e-007
5.4502e-005 3.2723e-005
[1,2] =
1.0565e-005 6.0614e-005 4.6007e-005 1.3977e-004 1.0405e-004
}
{
[1,1] =
-1.8713e-005
1.1102e-009
4.7687e-005
1.2048e-005
-1.4609e-005
[1,2] = 9.5790e-006
}

1.3b Gradient Check – Softmax Activation – Octave

source("DL8functions.m")
data=csvread("spiral.csv");

# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the layer dimensions
layersDimensions = [2 3  3]; 
[weights biases] = initializeDeepModel(layersDimensions);
# Run forward prop
[AL forward_caches activation_caches droputMat] = forwardPropagationDeep(X', weights, biases,keep_prob=1, 
                 hiddenActivationFunc="relu", outputActivationFunc="softmax");
# Compute cost
cost = computeCost(AL, Y',outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
disp(cost);
# Perform backward prop
[gradsDA gradsDW gradsDB] = backwardPropagationDeep(AL, Y', activation_caches,forward_caches, droputMat, lambd=0, keep_prob=1, 
                                 hiddenActivationFunc="relu", outputActivationFunc="softmax",
                                 numClasses=layersDimensions(size(layersDimensions)(2)));

#Take transpose of last layer for Softmax                                
L=size(weights)(2);
gradsDW{L}= gradsDW{L}';
gradsDB{L}= gradsDB{L}';   
#Perform gradient check
difference= gradient_check_n(weights, biases, gradsDW,gradsDB, X, Y, epsilon = 1e-7,
                  outputActivationFunc="softmax",numClasses=layersDimensions(size(layersDimensions)(2)));
 1.0986
The implementation works perfectly  2.0021e-005
Gradients from back propagation
{
  [1,1] =
    -7.1590e-005  4.1375e-005
    -1.9494e-004  -5.2014e-005
    -1.4554e-004  5.1699e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5662e-005
    -4.9692e-004  -3.7756e-004  -8.2318e-005
    1.6562e-004  1.7950e-004  9.7980e-005
}
{
  [1,1] =
    -3.0856e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.2046e-006
    2.9259e-007
    -1.4972e-006
}
Gradients from gradient check
{
  [1,1] =
    -7.1586e-005  4.1377e-005
    -1.9494e-004  -5.2013e-005
    -1.4554e-004  5.1695e-005
  [1,2] =
    3.3129e-004  1.9806e-004  -1.5664e-005
    -4.9692e-004  -3.7756e-004  -8.2316e-005
    1.6562e-004  1.7950e-004  9.7979e-005
}
{
  [1,1] =
    -3.0852e-005
    -3.3321e-004
    -3.8197e-004
  [1,2] =
    1.1902e-006
    2.8200e-007
    -1.4644e-006
}

2.1 Tips for tuning hyper-parameters

Deep Learning Networks come with a large number of hyper-parameters which require tuning. The hyper-parameters are

1. \alpha -learning rate
2. Number of layers
3. Number of hidden units
4. Number of iterations
5. Momentum – \beta – 0.9
6. RMSProp – \beta_{1} – 0.9
7. Adam – \beta_{1},\beta_{2} and \epsilon
8. learning rate decay
9. mini batch size
10. Initialization method – He, Xavier
11. Regularization

– Among the above, the most critical is the learning rate \alpha. Rather than just trying out random values, it may help to try out values on a logarithmic scale. So we could try out values such as 0.01, 0.1, 1.0, 10 etc. If we find that the best value lies between 0.01 and 0.1, we could use a technique similar to binary search or bisection: try the midpoint 0.05, and depending on which side performs better, keep halving the interval (a small sketch of this logarithmic search is shown after this list).
– Momentum and RMSProp perform very well and work well with \beta values around 0.9. Even with this, it is better to try out values of 1-\beta in the logarithmic range. So 1-\beta could be 0.001, 0.01, 0.1 and hence \beta would be 0.999, 0.99 or 0.9
– Increasing the number of hidden units or the number of hidden layers needs to be done gradually. I have noticed that increasing the number of hidden layers heavily does not improve performance and sometimes degrades it.
– Sometimes, I tend to increase the number of iterations if I see a steady decrease in the cost for a certain learning rate
– It may also help to add learning rate decay if you see an oscillation in the cost while it decreases.
– Xavier and He initializations also help in fast convergence and are worth trying out.
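As a small illustrative sketch of the logarithmic-scale search mentioned in the first point above (the ranges used here are just examples), learning rates and \beta values can be sampled as follows.

import numpy as np

np.random.seed(7)
# Sample 5 candidate learning rates uniformly on a log scale between 1e-4 and 1
exponents = np.random.uniform(low=-4, high=0, size=5)
candidate_alphas = 10 ** exponents
print(np.sort(candidate_alphas))

# For momentum/RMSProp, sample 1-beta on a log scale between 0.001 and 0.1
exponents = np.random.uniform(low=-3, high=-1, size=5)
candidate_betas = 1 - 10 ** exponents
print(np.sort(candidate_betas))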

3.1 Final thoughts

As I come to a close in this Deep Learning Series from first principles in Python, R and Octave, I must admit that I learnt a lot in the process.

* Building a L-layer, vectorized Deep Learning Network in Python, R and Octave was extremely challenging but very rewarding
* One benefit of building vectorized versions in Python, R and Octave was that I ended up looking at each function I implemented three times, and hence was able to fix bugs across all the languages
* In addition, since I built the generic L-Layer DL network with all the bells and whistles layer by layer, I had a further opportunity to revisit all the functions in each successive post.
* Each language has its advantages and disadvantages. From the performance perspective I think Python is the best, followed by Octave and then R
* Interestingly, I noticed that even if small bugs creep into your implementation, the DL network does learn and does generate a valid set of weights and biases; however, this may not be an optimum solution. In one case of an inadvertent bug, I was not updating the weights in the final layer of the DL network. Yet, using all the other layers, the DL network was able to come up with a reasonable solution (maybe, as with random dropout, the remaining units can still learn the data!)
* Having said that, the Gradient Check method discussed and implemented in this post can be very useful in ironing out bugs.
Feel free to clone/download the code from Github at DeepLearning-Part8

Conclusion

These last couple of months, when I was writing the posts and also churning out the code in Python, R and Octave, were very hectic. There were times when I found the implementation of some functions extremely demanding, and I almost felt like giving up. At other times, I spent quite some time on an intractable DL network which would not respond to changes in hyper-parameters. All in all, it was a great learning experience. I would suggest that you start from my first post Deep Learning from first principles in Python, R and Octave-Part 1 and work your way up. Feel free to take the code apart and try things out. That is the only way you will learn.

Hope you had as much fun as I had. Stay tuned. I will be back!!!

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Revisiting crimes against women in India
3. Literacy in India – A deepR dive
4. Sixer – R package cricketr’s new Shiny avatar
5. Bend it like Bluemix, MongoDB using Auto-scale – Part 1!
6. Computer Vision: Ramblings on derivatives, histograms and contours
7. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
8. A closer look at “Robot Horse on a Trot” in Android

To see all posts click Index of Posts

Deep Learning from first principles in Python, R and Octave – Part 5

Introduction

a. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
b. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
c. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

      Isaac Asimov's Three Laws of Robotics 

Any sufficiently advanced technology is indistinguishable from magic.

      Arthur C Clarke.   

In this 5th part on Deep Learning from first Principles in Python, R and Octave, I solve the MNIST data set of handwritten digits (shown below) from the basics. To do this, I construct an L-Layer, vectorized Deep Learning implementation in Python, R and Octave from scratch and classify the MNIST data set. The MNIST training set contains 60000 handwritten digits from 0-9, and the test set contains 10000 digits. MNIST is a popular dataset for running Deep Learning tests, and has been rightfully termed the ‘drosophila’ of Deep Learning by none other than the venerable Prof Geoffrey Hinton.

The ‘Deep Learning from first principles in Python, R and Octave’ series has so far included Part 1, where I implemented logistic regression as a simple Neural Network, and Part 2, which implemented the most elementary neural network, with 1 hidden layer, any number of activation units in that layer, and a sigmoid activation at the output layer.

This post, ‘Deep Learning from first principles in Python, R and Octave – Part 5’, largely builds upon Part 3, in which I implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer, and with an output layer based on the sigmoid unit for binary classification. In Part 4, I derived the Jacobian of the Softmax, the Cross entropy loss and the gradient equations for a multi-class Softmax classifier. I also implemented a simple Neural Network using Softmax classification in Python, R and Octave.

In this post I combine Part 3 and Part 4 to build an L-layer Deep Learning network, with an arbitrary number of hidden layers and hidden units, which can do both binary (sigmoid) and multi-class (softmax) classification.

Note: A detailed discussion of the derivation for multi-class classification can be seen in my video presentation Neural Networks 5

The generic, vectorized L-Layer Deep Learning Network implementations in Python, R and Octave can be cloned/downloaded from GitHub at DeepLearning-Part5. This implementation allows for an arbitrary number of hidden layers and hidden layer units. The activation function at the hidden layers can be one of sigmoid, relu and tanh (leaky relu will be added soon). The output activation can be used for binary classification with ‘sigmoid’, or multi-class classification with ‘softmax’. Feel free to download and play around with the code!

I thought the exercise of combining the two parts (Part 3 & Part 4) would be a breeze. But it was anything but. Incorporating a Softmax classifier into the generic L-Layer Deep Learning model was a challenge. Moreover, I found that I could not run gradient descent on all 60,000 training samples at once, as my laptop ran out of memory. So I had to implement Stochastic Gradient Descent (SGD) in Python, R and Octave. In addition, I also had to implement a numerically stable version of Softmax, as the softmax and its derivative would otherwise result in NaNs.
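A minimal sketch of how the training data can be split into random mini-batches for SGD is shown below. This is only an illustration of the idea, not the exact helper used in DLfunctions51.py, and it assumes, as elsewhere in this series, that the samples lie along the columns of X and Y.

import numpy as np

def random_mini_batches(X, Y, mini_batch_size=1000, seed=0):
    # X: (features, m), Y: (classes or 1, m) with samples along the columns
    np.random.seed(seed)
    m = X.shape[1]
    permutation = np.random.permutation(m)
    X_shuffled = X[:, permutation]
    Y_shuffled = Y[:, permutation]
    mini_batches = []
    # Partition the shuffled data into batches of mini_batch_size
    # (the last batch may be smaller if m is not a multiple of the size)
    for k in range(0, m, mini_batch_size):
        mini_batch_X = X_shuffled[:, k:k + mini_batch_size]
        mini_batch_Y = Y_shuffled[:, k:k + mini_batch_size]
        mini_batches.append((mini_batch_X, mini_batch_Y))
    return mini_batches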

Numerically stable Softmax

The Softmax function S_{j} =\frac{e^{Z_{j}}}{\sum_{i}^{k}e^{Z_{i}}} can be numerically unstable because the exponentials of large values of Z can overflow. To handle this problem we have to implement a stable Softmax function as below

S_{j} =\frac{e^{Z_{j}}}{\sum_{i}^{k}e^{Z_{i}}}
S_{j} =\frac{e^{Z_{j}}}{\sum_{i}^{k}e^{Z_{i}}} = \frac{Ce^{Z_{j}}}{C\sum_{i}^{k}e^{Z_{i}}} = \frac{e^{Z_{j}+log(C)}}{\sum_{i}^{k}e^{Z_{i}+log(C)}}
Therefore S_{j}  = \frac{e^{Z_{j}+ D}}{\sum_{i}^{k}e^{Z_{i}+ D}}, where D = log(C).
Here ‘D’ can be any constant. A common choice is
D=-max(Z_{1},Z_{2},... Z_{k})

Here is the stable Softmax implementation in Python

# A numerically stable Softmax implementation
def stableSoftmax(Z):  
    #Compute the softmax of vector x in a numerically stable way.
    shiftZ = Z.T - np.max(Z.T,axis=1).reshape(-1,1)
    exp_scores = np.exp(shiftZ)
    # normalize them for each example
    A = exp_scores / np.sum(exp_scores, axis=1, keepdims=True) 
    cache=Z
    return A,cache
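As a quick sanity check of the function above (assuming stableSoftmax as defined in the listing is in scope, and that Z has the classes along the rows and the samples along the columns, as elsewhere in this series), deliberately large scores no longer overflow and the activations for each sample sum to 1.

# Quick check of stableSoftmax defined above
import numpy as np
Z = np.array([[1000.0, -1000.0],
              [1001.0,  -999.0],
              [1002.0,  -998.0]])   # 3 classes, 2 samples
A, _ = stableSoftmax(Z)
print(A)              # no NaNs or infs
print(A.sum(axis=1))  # each row (one per sample) sums to 1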

While trying to create an L-Layer generic Deep Learning network in the 3 languages, I found it useful to ensure that the model executed correctly on smaller datasets. You can run into numerous problems while setting up the matrices, which become extremely difficult to debug. So in this post, I run the model on 2 smaller data sets used in my earlier posts (Part 3 & Part 4), in each of the languages, before running the generic model on MNIST.

Here is a fair warning: if you think you can dive directly into Deep Learning with just some basic knowledge of Machine Learning, you are bound to run into serious issues. Moreover, your knowledge will be incomplete. It is essential that you have a good grasp of Machine and Statistical Learning, the different algorithms, and the measures and metrics for selecting models. It would help to be conversant with all the ML models, ML concepts, validation techniques, classification measures etc. Check out the internet/books for background.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python:Second Edition- Machine Learning in stereo” available in Amazon in paperback($10.99) and Kindle($7.99/Rs449) versions. This book is ideal for a quick reference of the various ML functions and associated measurements in both R and Python which are essential to delve deep into Deep Learning.

1. Random dataset with Sigmoid activation – Python

This random data with 9 clusters, was used in my post Deep Learning from first principles in Python, R and Octave – Part 3 , and was used to test the complete L-layer Deep Learning network with Sigmoid activation.

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification, make_blobs
exec(open("DLfunctions51.py").read()) # Cannot import in Rmd.
# Create a random data set with 9 centers
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,cluster_std = 1.3, random_state =4)
                       
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of L -layer DL network
layersDimensions = [2, 9, 9,1] #  4-layer model
# Execute DL network with hidden activation=relu and sigmoid output function
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="sigmoid",learningRate = 0.3,num_iterations = 2500, print_cost = True)

2. Spiral dataset with Softmax activation – Python

The Spiral data was used in my post Deep Learning from first principles in Python, R and Octave – Part 4 and was used to test the complete L-layer Deep Learning network with multi-class Softmax activation at the output layer

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import make_classification, make_blobs
exec(open("DLfunctions51.py").read())

# Create an input data set - Taken from CS231n Convolutional Neural networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j

X1=X.T
Y1=y.reshape(-1,1).T
numHidden=100 # No of hidden units in hidden layer
numFeats= 2 # dimensionality
numOutput = 3 # number of classes
# Set the dimensions of the layers
layersDimensions=[numFeats,numHidden,numOutput]
parameters = L_Layer_DeepModel(X1, Y1, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="softmax",learningRate = 0.6,num_iterations = 9000, print_cost = True)
## Cost after iteration 0: 1.098759
## Cost after iteration 1000: 0.112666
## Cost after iteration 2000: 0.044351
## Cost after iteration 3000: 0.027491
## Cost after iteration 4000: 0.021898
## Cost after iteration 5000: 0.019181
## Cost after iteration 6000: 0.017832
## Cost after iteration 7000: 0.017452
## Cost after iteration 8000: 0.017161

3. MNIST dataset with Softmax activation – Python

In the code below, I execute Stochastic Gradient Descent on the MNIST training data of 60000 samples, with a mini-batch size of 1000. Python takes about 40 minutes to crunch the data. In addition, I also compute the Confusion Matrix and other metrics like Accuracy, Precision and Recall for the MNIST data set. I get an accuracy of 0.93 on the MNIST test set. This accuracy can be improved by choosing more hidden layers or more hidden units, and possibly also by tweaking the learning rate and the number of epochs.

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import math
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
exec(open("DLfunctions51.py").read())
exec(open("load_mnist.py").read())
# Read the MNIST training and test sets
training=list(read(dataset='training',path=".\\mnist"))
test=list(read(dataset='testing',path=".\\mnist"))
# Create labels and pixel arrays
lbls=[]
pxls=[]
print(len(training))
#for i in range(len(training)):
for i in range(60000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)       
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T
# Set the dimensions of the layers. The MNIST data is 28x28 pixels= 784
# Hence input layer is 784. For the 10 digits the Softmax classifier
# has to handle 10 outputs
layersDimensions=[784, 15,9,10] # Works very well,lr=0.01,mini_batch =1000, total=20000
np.random.seed(1)
costs = []  
# Run Stochastic Gradient Descent with Learning Rate=0.01, mini batch size=1000
# number of epochs=3000
parameters = L_Layer_DeepModel_SGD(X1, Y1, layersDimensions, hiddenActivationFunc='relu', outputActivationFunc="softmax",learningRate = 0.01 ,mini_batch_size =1000, num_epochs = 3000, print_cost = True)

# Compute the Confusion Matrix on Training set
# Compute the training accuracy, precision and recall
proba=predict_proba(parameters, X1,outputActivationFunc="softmax")
#A2, cache = forwardPropagationDeep(X1, parameters)
#proba=np.argmax(A2, axis=0).reshape(-1,1)
a=confusion_matrix(Y1.T,proba)
print(a)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y1.T, proba)))
print('Precision: {:.2f}'.format(precision_score(Y1.T, proba,average="micro")))
print('Recall: {:.2f}'.format(recall_score(Y1.T, proba,average="micro")))

# Read the test data
lbls=[]
pxls=[]
print(len(test))
for i in range(10000):
       l,p=test[i]
       lbls.append(l)
       pxls.append(p)
testLabels= np.array(lbls)
testPixels=np.array(pxls)       
ytest=testLabels.reshape(-1,1)
Xtest=testPixels.reshape(testPixels.shape[0],-1)
X1test=Xtest.T
Y1test=ytest.T

# Compute the Confusion Matrix on Test set
# Compute the test accuracy, precision and recall
probaTest=predict_proba(parameters, X1test,outputActivationFunc="softmax")
#A2, cache = forwardPropagationDeep(X1, parameters)
#proba=np.argmax(A2, axis=0).reshape(-1,1)
a=confusion_matrix(Y1test.T,probaTest)
print(a)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y1test.T, probaTest)))
print('Precision: {:.2f}'.format(precision_score(Y1test.T, probaTest,average="micro")))
print('Recall: {:.2f}'.format(recall_score(Y1test.T, probaTest,average="micro")))
##1.  Confusion Matrix of Training set
       0     1    2    3    4    5    6    7    8    9
## [[5854    0   19    2   10    7    0    1   24    6]
##  [   1 6659   30   10    5    3    0   14   20    0]
##  [  20   24 5805   18    6   11    2   32   37    3]
##  [   5    4  175 5783    1   27    1   58   60   17]
##  [   1   21    9    0 5780    0    5    2   12   12]
##  [  29    9   21  224    6 4824   18   17  245   28]
##  [   5    4   22    1   32   12 5799    0   43    0]
##  [   3   13  148  154   18    3    0 5883    4   39]
##  [  11   34   30   21   13   16    4    7 5703   12]
##  [  10    4    1   32  135   14    1   92  134 5526]]

##2. Accuracy, Precision, Recall of  Training set
## Accuracy: 0.96
## Precision: 0.96
## Recall: 0.96

##3. Confusion Matrix of Test set
       0     1    2    3    4    5    6    7    8    9
## [[ 954    1    8    0    3    3    2    4    4    1]
##  [   0 1107    6    5    0    0    1    2   14    0]
##  [  11    7  957   10    5    0    5   20   16    1]
##  [   2    3   37  925    3   13    0    8   18    1]
##  [   2    6    1    1  944    0    7    3    4   14]
##  [  12    5    4   45    2  740   24    8   42   10]
##  [   8    4    4    2   16    9  903    0   12    0]
##  [   4   10   27   18    5    1    0  940    1   22]
##  [  11   13    6   13    9   10    7    2  900    3]
##  [   8    5    1    7   50    7    0   20   29  882]]
##4. Accuracy, Precision, Recall of  Test set
## Accuracy: 0.93
## Precision: 0.93
## Recall: 0.93

4. Random dataset with Sigmoid activation – R code

This is the random data set used in the Python code above, which was saved as a CSV file. The code below uses it to test an L-Layer DL network with Sigmoid activation in R.

source("DLfunctions5.R")
# Read the random data set
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X <- t(x)
Y <- t(y)
# Set the dimensions of the  layer
layersDimensions = c(2, 9, 9,1)

# Run Gradient Descent on the data set with relu hidden unit activation 
# sigmoid activation unit in the output layer
retvals = L_Layer_DeepModel(X, Y, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="sigmoid",
                            learningRate = 0.3,
                            numIterations = 5000, 
                            print_cost = TRUE)
#Plot the cost vs iterations
iterations <- seq(0,5000,1000)
costs=retvals$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs iterations") + xlab("Iterations") + ylab("Loss")

5. Spiral dataset with Softmax activation – R

The spiral data set used in the Python code above is reused to test multi-class classification with Softmax.

source("DLfunctions5.R")
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X <- t(X)
Y <- t(y)

# Initialize number of features, number of hidden units in hidden layer and
# number of classes
numFeats<-2 # No features
numHidden<-100 # No of hidden units
numOutput<-3 # No of classes

# Set the layer dimensions
layersDimensions = c(numFeats,numHidden,numOutput)

# Perform gradient descent with relu activation unit for hidden layer
# and softmax activation in the output
retvals = L_Layer_DeepModel(X, Y, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="softmax",
                            learningRate = 0.5,
                            numIterations = 9000, 
                            print_cost = TRUE)
#Plot cost vs iterations
iterations <- seq(0,9000,1000)
costs=retvals$costs
df=data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() + geom_line(color="blue") +
 ggtitle("Costs vs iterations") + xlab("Iterations") + ylab("Costs")

6. MNIST dataset with Softmax activation – R

The code below executes an L-Layer Deep Learning network with Softmax output activation, to classify the 10 handwritten digits from MNIST using Stochastic Gradient Descent. The entire 60000-sample data set was used to train the network. R takes almost 8 hours to process this data set with a mini-batch size of 1000. The use of ‘for’ loops is limited to iterating through epochs and mini-batches, and to creating the mini-batches themselves. All other code is vectorized. Yet, it seems to crawl. Most likely the use of ‘lists’ in R, to return multiple values, is performance intensive. Some day, I will try to profile the code and see where the issue is. However, the code works!

Having said that, the Confusion Matrix in R dumps a lot of interesting statistics! There is a bunch of statistical measures for each class. For e.g. the Balanced Accuracy for the digits ‘6’ and ‘9’ is around 50%. It looks like the classifier is confused by the fact that a 6 is an inverted 9, and vice-versa. The accuracy on the test data set is just around 75%. I could have played around with the number of layers, number of hidden units, learning rates, epochs etc. to get a much higher accuracy. But since each test took about 8+ hours, I may work on this some other day!

source("DLfunctions5.R")
source("mnist.R")
#Load the mnist data
load_mnist()
show_digit(train$x[2,])
#Set the layer dimensions
layersDimensions=c(784, 15,9, 10) # Works at 1500
x <- t(train$x)
X <- x[,1:60000]
y <-train$y
y1 <- y[1:60000]
y2 <- as.matrix(y1)
Y=t(y2)

# Subset 32768 random samples from MNIST 
permutation = c(sample(2^15))
# Randomly shuffle the training data
X1 = X[, permutation]
y1 = Y[1, permutation]
y2 <- as.matrix(y1)
Y1=t(y2)

# Execute Stochastic Gradient Descent on the entire training set
# with Softmax activation
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
                            hiddenActivationFunc='relu',
                            outputActivationFunc="softmax",
                            learningRate = 0.05,
                            mini_batch_size = 512, 
                            num_epochs = 1, 
                            print_cost = TRUE)

# Compute the Confusion Matrix
library(caret)
library(e1071)
predictions=predictProba(retvalsSGD[['parameters']], X,hiddenActivationFunc='relu',
                   outputActivationFunc="softmax")
confusionMatrix(predictions,Y)
# Confusion Matrix on the Training set
> confusionMatrix(predictions,Y)
Confusion Matrix and Statistics

          Reference
Prediction    0    1    2    3    4    5    6    7    8    9
         0 5738    1   21    5   16   17    7   15    9   43
         1    5 6632   21   24   25    3    2   33   13  392
         2   12   32 5747  106   25   28    3   27   44 4779
         3    0   27   12 5715    1   21    1   20    1   13
         4   10    5   21   18 5677    9   17   30   15  166
         5  142   21   96  136   93 5306 5884   43   60  413
         6    0    0    0    0    0    0    0    0    0    0
         7    6    9   13   13    3    4    0 6085    0   55
         8    8   12    7   43    1   32    2    7 5703   69
         9    2    3   20   71    1    1    2    5    6   19

Overall Statistics
                                          
               Accuracy : 0.777           
                 95% CI : (0.7737, 0.7804)
    No Information Rate : 0.1124          
    P-Value [Acc > NIR] : < 2.2e-16       
                                          
                  Kappa : 0.7524          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: 0 Class: 1 Class: 2 Class: 3 Class: 4 Class: 5 Class: 6
Sensitivity           0.96877   0.9837  0.96459  0.93215  0.97176  0.97879  0.00000
Specificity           0.99752   0.9903  0.90644  0.99822  0.99463  0.87380  1.00000
Pos Pred Value        0.97718   0.9276  0.53198  0.98348  0.95124  0.43513      NaN
Neg Pred Value        0.99658   0.9979  0.99571  0.99232  0.99695  0.99759  0.90137
Prevalence            0.09872   0.1124  0.09930  0.10218  0.09737  0.09035  0.09863
Detection Rate        0.09563   0.1105  0.09578  0.09525  0.09462  0.08843  0.00000
Detection Prevalence  0.09787   0.1192  0.18005  0.09685  0.09947  0.20323  0.00000
Balanced Accuracy     0.98314   0.9870  0.93551  0.96518  0.98319  0.92629  0.50000
                     Class: 7 Class: 8  Class: 9
Sensitivity            0.9713  0.97471 0.0031938
Specificity            0.9981  0.99666 0.9979464
Pos Pred Value         0.9834  0.96924 0.1461538
Neg Pred Value         0.9967  0.99727 0.9009521
Prevalence             0.1044  0.09752 0.0991500
Detection Rate         0.1014  0.09505 0.0003167
Detection Prevalence   0.1031  0.09807 0.0021667
Balanced Accuracy      0.9847  0.98568 0.5005701
# Confusion Matrix on the Test set
xtest <- t(test$x)
Xtest <- xtest[,1:10000]
ytest <- test$y
ytest1 <- ytest[1:10000]
ytest2 <- as.matrix(ytest1)
Ytest=t(ytest2)
# Predict on the test set and compute the Confusion Matrix
predictionsTest=predictProba(retvalsSGD[['parameters']], Xtest, hiddenActivationFunc='relu',
                   outputActivationFunc="softmax")
confusionMatrix(predictionsTest,Ytest)

Confusion Matrix and Statistics

          Reference
Prediction    0    1    2    3    4    5    6    7    8    9
         0  950    2    2    3    0    6    9    4    7    6
         1    3 1110    4    2    9    0    3   12    5   74
         2    2    6  965   21    9   14    5   16   12  789
         3    1    2    9  908    2   16    0   21    2    6
         4    0    1    9    5  938    1    8    6    8   39
         5   19    5   25   35   20  835  929    8   54   67
         6    0    0    0    0    0    0    0    0    0    0
         7    4    4    7   10    2    4    0  952    5    6
         8    1    5    8   14    2   16    2    3  876   21
         9    0    0    3   12    0    0    2    6    5    1

Overall Statistics
                                          
               Accuracy : 0.7535          
                 95% CI : (0.7449, 0.7619)
    No Information Rate : 0.1135          
    P-Value [Acc > NIR] : < 2.2e-16       
                                          
                  Kappa : 0.7262          
 Mcnemar's Test P-Value : NA              

Statistics by Class:

                     Class: 0 Class: 1 Class: 2 Class: 3 Class: 4 Class: 5 Class: 6
Sensitivity            0.9694   0.9780   0.9351   0.8990   0.9552   0.9361   0.0000
Specificity            0.9957   0.9874   0.9025   0.9934   0.9915   0.8724   1.0000
Pos Pred Value         0.9606   0.9083   0.5247   0.9390   0.9241   0.4181      NaN
Neg Pred Value         0.9967   0.9972   0.9918   0.9887   0.9951   0.9929   0.9042
Prevalence             0.0980   0.1135   0.1032   0.1010   0.0982   0.0892   0.0958
Detection Rate         0.0950   0.1110   0.0965   0.0908   0.0938   0.0835   0.0000
Detection Prevalence   0.0989   0.1222   0.1839   0.0967   0.1015   0.1997   0.0000
Balanced Accuracy      0.9825   0.9827   0.9188   0.9462   0.9733   0.9043   0.5000
                     Class: 7 Class: 8  Class: 9
Sensitivity            0.9261   0.8994 0.0009911
Specificity            0.9953   0.9920 0.9968858
Pos Pred Value         0.9577   0.9241 0.0344828
Neg Pred Value         0.9916   0.9892 0.8989068
Prevalence             0.1028   0.0974 0.1009000
Detection Rate         0.0952   0.0876 0.0001000
Detection Prevalence   0.0994   0.0948 0.0029000
Balanced Accuracy      0.9607   0.9457 0.4989384

7. Random dataset with Sigmoid activation – Octave

The Octave code below uses the random data set used by the Python code earlier, and implements an L-Layer Deep Learning network with sigmoid activation in the output layer.


source("DL5functions.m")
# Read the data
data=csvread("data.csv");

X=data(:,1:2);
Y=data(:,3);
#Set the layer dimensions
layersDimensions = [2 9 7  1]; #tanh=-0.5(ok), #relu=0.1 best!
# Perform gradient descent 
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
                               hiddenActivationFunc='relu', 
                               outputActivationFunc="sigmoid",
                               learningRate = 0.1,
                               numIterations = 10000);
# Plot cost vs iterations
plotCostVsIterations(10000,costs);       

8. Spiral dataset with Softmax activation – Octave

The code below uses the spiral data set used by Python above and implements an L-Layer Deep Learning network with Softmax activation.

# Read the data
data=csvread("spiral.csv");

# Setup the data
X=data(:,1:2);
Y=data(:,3);

# Set the number of features, number of hidden units in hidden layer and number of classes
numFeats=2; #No features
numHidden=100; # No of hidden units
numOutput=3; # No of  classes
# Set the layer dimensions
layersDimensions = [numFeats numHidden  numOutput];  
#Perform gradient descent with softmax activation unit
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions,
                               hiddenActivationFunc='relu', 
                               outputActivationFunc="softmax",
                               learningRate = 0.1,
                               numIterations = 10000); 

9. MNIST dataset with Softmax activation – Octave

The code below implements an L-Layer Deep Learning Network in Octave with Softmax output activation unit, for classifying the 10 handwritten digits in the MNIST dataset. Unfortunately, Octave could only index around 10000 training samples at a time, and I was getting the error ‘error: out of memory or dimension too large for Octave’s index type error: called from…’ when I tried to create a batch size of 20000. So I had to come up with a workaround: create a batch of 10000 samples (randomly) and then use mini-batches of 1000 samples to execute Stochastic Gradient Descent. The performance was good. Octave takes about 15 minutes on a batch size of 10000 and a mini-batch of 1000.

I thought that, if the performance was not good, I could iterate through these random batches, refining the gradients as in the pseudocode below

# Pseudo code that could be used since Octave only allows 10K batches
# at a time
# Randomly create weights
[weights biases] = initialize_weights()
for i=1:k
    # Create a random permutation and create a random batch
    permutation = randperm(10000);
    X=trainX(permutation,:);
    Y=trainY(permutation,:);
    # Compute weights from SGD and update weights in the next batch update
    [weights biases costs]=L_Layer_DeepModel_SGD(X,Y,mini_batch_size=1000,weights, biases,...);
    ...
endfor
# Load the MNIST data
load('./mnist/mnist.txt.gz'); 
#Create a random permutation of 10K samples (workaround for the Octave index limit)
permutation = randperm(10000);
disp(length(permutation));

# Use this 10K as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];

# Run Stochastic Gradient Descent on the 10K batch with the mini-batch size set below
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
                       hiddenActivationFunc='relu', 
                       outputActivationFunc="softmax",
                       learningRate = 0.01,
                       mini_batch_size = 2000, num_epochs = 5000);   

10. Final thoughts

Here are some of my final thoughts after working with Python, R and Octave in this series and in other projects:
1. Python, with its highly optimized numpy library, is ideally suited for creating Deep Learning models, which involve a lot of matrix manipulations. Python is a real workhorse when it comes to Deep Learning computations.
2. R is somewhat clunky in comparison to its cousin Python when handling matrices or returning multiple values. But R’s statistical libraries, dplyr and ggplot2 are really superior to their Python counterparts. Also, I find that R handles data frames much better than Python.
3. Octave is a no-nonsense, minimalist language which is very efficient in handling matrices. It is ideally suited for implementing Machine Learning and Deep Learning from scratch. But Octave has its problems: it cannot handle large matrix sizes, and it also lacks the statistical libraries of R and Python. These possibly exist in its sibling, Matlab.

Feel free to clone/download the code from  GitHub at DeepLearning-Part5.

Conclusion

Building a Deep Learning Network from scratch is quite challenging and time-consuming, but nevertheless an exciting task. While operations such as manipulating matrices, summing up columns or finding the columns which have ones do not take more than a single statement in any of the three languages, extreme care has to be taken to ensure that the statements work correctly for any dimension. The lessons learnt from creating an L-Layer Deep Learning network are many and well worth the effort. Give it a try!

Hasta la vista! I’ll be back, so stick around!
Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. CS231 Convolutional Neural Networks for Visual Recognition
4. Eli Bendersky’s Website – The Softmax function and its derivative

Also see
1. My book ‘Practical Machine Learning with R and Python’ on Amazon
2. Presentation on Wireless Technologies – Part 1
3. Exploring Quantum Gate operations with QCSimulator
4. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
5. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
6. cricketr plays the ODIs!
7. “Is it an animal? Is it an insect?” in Android
8. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
9. Deblurring with OpenCV: Weiner filter reloaded
10. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables

To see all posts click Index of Posts

 

Deep Learning from first principles in Python, R and Octave – Part 4

In this 4th post of my series on Deep Learning from first principles in Python, R and Octave, I explore the details of creating a multi-class classifier using the Softmax activation unit in a neural network. The earlier posts in this series were

1. Deep Learning from first principles in Python, R and Octave – Part 1. In this post I implemented logistic regression as a simple Neural Network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2. This 2nd part implemented the most elementary neural network with 1 hidden layer and any number of activation units in the hidden layer with sigmoid activation at the output layer
3. Deep Learning from first principles in Python, R and Octave – Part 3. The 3rd post implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer. The output layer was for binary classification, based on the sigmoid unit. This multi-layer deep network was implemented in vectorized Python, R and Octave.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

This 4th part takes a swing at multi-class classification and uses the Softmax as the activation unit in the output layer. Inclusion of the Softmax activation unit in the output layer requires us to compute the derivative of the Softmax, or rather the “Jacobian” of the Softmax function, besides computing the derivative of the cross-entropy/log loss for this Softmax activation during back propagation. Since the derivation of the Jacobian of the Softmax and the computation of the cross-entropy/log loss is quite involved, I have implemented a basic neural network with just 1 hidden layer with the Softmax activation at the output layer. I also perform multi-class classification based on the ‘spiral’ data set from the CS231n Convolutional Neural Networks Stanford course, to test the performance and correctness of the implementations in Python, R and Octave. You can clone/download the code for the Python, R and Octave implementations from Github at Deep Learning – Part 4

Note: A detailed discussion of the derivation below can also be seen in my video presentation Neural Networks 5

The Softmax function takes an N-dimensional vector as input and generates an N-dimensional vector as output.
The Softmax function is given by
S_{j}= \frac{e^{z_{j}}}{\sum_{k=1}^{N}e^{z_{k}}}
There is a probabilistic interpretation of the Softmax, since the Softmax values of the output vector always add up to 1, given that each Softmax value is divided by the sum over all the values.

As mentioned earlier, the Softmax takes a vector as input and returns a vector of outputs. For example, the Softmax of the vector a=[1, 3, 6] is the vector S=[0.0063, 0.0471, 0.9464]. Notice that larger inputs map to larger output probabilities, and that the outputs sum to 1. Also, the derivative of a vector with respect to another vector is known as the Jacobian. By the way, The Matrix Calculus You Need For Deep Learning by Terence Parr and Jeremy Howard is a very good paper that distills all the main mathematical concepts for Deep Learning in one place.
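A quick numerical check of the example above (a small R sketch):

# Numerical check of the Softmax example above
a <- c(1, 3, 6)
S <- exp(a) / sum(exp(a))
round(S, 4)    # 0.0064 0.0471 0.9465 - the values quoted above, up to rounding
sum(S)         # 1, as expected of a probability distribution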

Let us take a simple 2-layered neural network with just 2 activation units in the hidden layer, as shown below

Z_{1}^{1} =W_{11}^{1}x_{1} + W_{21}^{1}x_{2} + b_{1}^{1}
Z_{2}^{1} =W_{12}^{1}x_{1} + W_{22}^{1}x_{2} + b_{2}^{1}
and
A_{1}^{1} = g(Z_{1}^{1})
A_{2}^{1} = g(Z_{2}^{1})
where g() is the activation function in the hidden layer, which can be a relu, sigmoid or a
tanh function

Note: The superscript denotes the layer. The above denotes the equation for layer 1
of the neural network. For layer 2 with the Softmax activation, the equations are
Z_{1}^{2} =W_{11}^{2}A_{1}^{1} + W_{21}^{2}A_{2}^{1} + b_{1}^{2}
Z_{2}^{2} =W_{12}^{2}A_{1}^{1} + W_{22}^{2}A_{2}^{1} + b_{2}^{2}
and
A_{1}^{2} = S(Z_{1}^{2})
A_{2}^{2} = S(Z_{2}^{2})
where S() is the Softmax activation function
S=\begin{pmatrix} S(Z_{1}^{2})\\ S(Z_{2}^{2}) \end{pmatrix}
S=\begin{pmatrix} \frac{e^{Z1}}{e^{Z1}+e^{Z2}}\\ \frac{e^{Z2}}{e^{Z1}+e^{Z2}} \end{pmatrix}

The Jacobian of the softmax ‘S’ is given by
\begin{pmatrix} \frac {\partial S_{1}}{\partial Z_{1}} & \frac {\partial S_{1}}{\partial Z_{2}}\\ \frac {\partial S_{2}}{\partial Z_{1}} & \frac {\partial S_{2}}{\partial Z_{2}} \end{pmatrix}
\begin{pmatrix} \frac{\partial}{\partial Z_{1}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}}\\ \frac{\partial}{\partial Z_{1}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}} \end{pmatrix}     – (A)

Now the quotient rule for derivatives is as follows. If u and v are functions of x, then
\frac{d}{dx} \frac {u}{v} =\frac {v\frac{du}{dx} -u\frac{dv}{dx}}{v^{2}}
Using this to compute each element of the above Jacobian matrix, we see that
when i=j we have
\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{1}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {(e^{Z_{1}}+e^{Z_{2}})e^{Z_{1}} - e^{Z_{1}}e^{Z_{1}}}{(e^{Z_{1}}+e^{Z_{2}})^{2}}
and when i \neq j
\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{2}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {0 - e^{Z_{1}}e^{Z_{2}}}{(e^{Z_{1}}+e^{Z_{2}})^{2}}
This is of the general form
\frac {\partial S_{j}}{\partial z_{i}} = S_{i}( 1-S_{j})  when i=j
and
\frac {\partial S_{j}}{\partial z_{i}} = -S_{i}S_{j}  when i \neq j
Note: Since the Softmax essentially gives the class probabilities, the following
notation is also used
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
If you throw the “Kronecker delta” into the equation, then the above equations can be expressed even more concisely as
\frac {\partial p_{j}}{\partial z_{i}} = p_{i} (\delta_{ij} - p_{j})
where \delta_{ij} = 1 when i=j and 0 when i \neq j

This reduces the Jacobian of the simple 2-output softmax in equation (A) to
\begin{pmatrix} p_{1}(1-p_{1}) & -p_{1}p_{2} \\ -p_{2}p_{1} & p_{2}(1-p_{2}) \end{pmatrix}
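The Kronecker delta form above is equivalent to the compact matrix expression diag(p) - pp^{T}. The small R sketch below checks this analytic Jacobian against a finite-difference approximation for the 2-output case of equation (A):

# Check the Softmax Jacobian p_i*(delta_ij - p_j), i.e. diag(p) - p %*% t(p),
# against a finite-difference approximation (illustrative sketch)
softmax <- function(z) exp(z) / sum(exp(z))
z <- c(1, 2)                          # a 2-output example, as in equation (A)
p <- softmax(z)
analyticJ <- diag(p) - p %*% t(p)     # analyticJ[i,j] = dS_i/dZ_j
eps <- 1e-6
numericJ <- matrix(0, 2, 2)
for (j in 1:2) {
    zp <- z; zp[j] <- zp[j] + eps
    zm <- z; zm[j] <- zm[j] - eps
    numericJ[, j] <- (softmax(zp) - softmax(zm)) / (2 * eps)
}
max(abs(analyticJ - numericJ))        # should be of the order of 1e-10 or smaller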
The cross-entropy loss for the Softmax output is given by
L = -\sum y_{i} log(p_{i})
For the 2 valued Softmax output this is
\frac {dL}{dp1} = -\frac {y_{1}}{p_{1}}
\frac {dL}{dp2} = -\frac {y_{2}}{p_{2}}
Using the chain rule we can write
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \frac {\partial p_{i}}{\partial w_{pq}} (1)
and
\frac {\partial p_{i}}{\partial w_{pq}} = \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}} (2)
In expanded form this is
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}}
Also
\frac {\partial L}{\partial Z_{i}} =\sum _{j} \frac {\partial L}{\partial p_{j}} \frac {\partial p_{j}}{\partial Z_{i}}
Therefore
\frac {\partial L}{\partial Z_{1}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{1}}
\frac {\partial L}{\partial z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1})
Since
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
which simplifies to
\frac {\partial L}{\partial Z_{1}} = -y_{1} + y_{1}p_{1} + y_{2}p_{1} = p_{1}(y_{1} + y_{2}) - y_{1}
\frac {\partial L}{\partial Z_{1}}= p_{1} - y_{1}
since
\sum_{i} y_{i} =1
Similarly
\frac {\partial L}{\partial Z_{2}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{2}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{2}}
\frac {\partial L}{\partial z_{2}}=-\frac {y_{1}}{p_{1}}(-p_{1}p_{2}) - \frac {y_{2}}{p_{2}}p_{2}(1-p_{2})
= y_{1}p_{2} + y_{2}p_{2} - y_{2}
\frac {\partial L}{\partial Z_{2}} =p_{2}(y_{1} + y_{2}) - y_{2} = p_{2} - y_{2}
In general this is of the form
\frac {\partial L}{\partial z_{i}} = p_{i} -y_{i}
For example, if the probabilities computed were p=[0.1, 0.7, 0.2], this implies that the class with probability 0.7 is the most likely class. The ‘one hot encoding’ for yi would then be yi=[0,1,0], and therefore the gradient pi-yi = [0.1, -0.3, 0.2]
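A tiny R sketch of this result with the numbers from the example above:

# Softmax + cross-entropy gradient dL/dz_i = p_i - y_i, with the numbers above
p <- c(0.1, 0.7, 0.2)      # predicted class probabilities
y <- c(0, 1, 0)            # one-hot encoding of the true class
loss <- -sum(y * log(p))   # cross-entropy loss = -log(0.7), about 0.357
gradient <- p - y          # 0.1 -0.3  0.2, as stated above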

Note: Further, we could extend this derivation for a Softmax activation output that outputs 3 classes
S=\begin{pmatrix} \frac{e^{z1}}{e^{z1}+e^{z2}+e^{z3}}\\ \frac{e^{z2}}{e^{z1}+e^{z2}+e^{z3}} \\ \frac{e^{z3}}{e^{z1}+e^{z2}+e^{z3}} \end{pmatrix}

We could derive
\frac {\partial L}{\partial z_{1}}= \frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial z_{1}} +\frac {\partial L}{\partial p_{3}} \frac {\partial p_{3}}{\partial z_{1}}, which similarly reduces to
\frac {\partial L}{\partial z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1}) - \frac {y_{3}}{p_{3}}(-p_{3}p_{1})
= -y_{1}+ y_{1}p_{1} + y_{2}p_{1} + y_{3}p_{1} = p_{1}(y_{1} + y_{2} + y_{3}) - y_{1} = p_{1} - y_{1}
Interestingly, despite the lengthy derivations the final result is simple and intuitive!

As seen in my post ‘Deep Learning from first principles with Python, R and Octave – Part 3’, the key equations for forward and backward propagation are

Forward propagation equations layer 1
Z_{1} = W_{1}X +b_{1}     and  A_{1} = g(Z_{1})
Forward propagation equations layer 2
Z_{2} = W_{2}A_{1} +b_{2}  and  A_{2} = S(Z_{2})

Using the result (A) in the back propagation equations below we have
Backward propagation equations layer 2
\partial L/\partial W_{2} =\partial L/\partial Z_{2}*A_{1}=(p_{2}-y_{2})*A_{1}
\partial L/\partial b_{2} =\partial L/\partial Z_{2}=p_{2}-y_{2}
\partial L/\partial A_{1} = \partial L/\partial Z_{2} * W_{2}=(p_{2}-y_{2})*W_{2}
Backward propagation equations layer 1
\partial L/\partial W_{1} =\partial L/\partial Z_{1} *A_{0}=(p_{1}-y_{1})*A_{0}
\partial L/\partial b_{1} =\partial L/\partial Z_{1}=(p_{1}-y_{1})

2.0 Spiral data set

As I mentioned earlier, I will be using the ‘spiral’ data from CS231n Convolutional Neural Networks to ensure that my vectorized implementations in Python, R and Octave are correct. Here is the ‘spiral’ data set.

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Create an input data set - Taken from CS231n Convolutional Neural networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
# Plot the data
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.savefig("fig1.png", bbox_inches='tight')


The implementations of the vectorized Python, R and Octave code are shown diagrammatically below

2.1 Multi-class classification with Softmax – Python code

A simple 2-layer Neural Network with a single hidden layer, with 100 ReLU activation units in the hidden layer and the Softmax activation unit in the output layer, is used for multi-class classification. This Deep Learning network plots the non-linear boundary of the 3 classes as shown below

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Read the input data
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
  
# Set the number of features, hidden units in hidden layer and number of classess
numHidden=100 # No of hidden units in hidden layer
numFeats= 2 # dimensionality
numOutput = 3 # number of classes

# Initialize the model
parameters=initializeModel(numFeats,numHidden,numOutput)
W1= parameters['W1']
b1= parameters['b1']
W2= parameters['W2']
b2= parameters['b2']

# Set the learning rate
learningRate=0.6 

# Initialize losses
losses=[]
# Perform Gradient descent
for i in range(10000):
    # Forward propagation through hidden layer with Relu units
    A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
    # Forward propagation through output layer with Softmax
    A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
    
    # No of training examples
    numTraining = X.shape[0]
    # Compute log probs. Take the log prob of correct class based on output y
    correct_logprobs = -np.log(A2[range(numTraining),y])
    # Compute loss
    loss = np.sum(correct_logprobs)/numTraining
    
    # Print the loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))
        losses.append(loss)

    dA=0

    # Backward  propagation through output layer with Softmax
    dA1,dW2,db2 = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
    # Backward  propagation through hidden layer with Relu unit
    dA0,dW1,db1 = layerActivationBackward(dA1.T, cache1, y, activationFunc='relu')
    
    #Update paramaters with the learning rate
    W1 += -learningRate * dW1
    b1 += -learningRate * db1
    W2 += -learningRate * dW2.T
    b2 += -learningRate * db2.T

#Plot losses vs iterations  
i=np.arange(0,10000,1000)
plt.plot(i,losses)

plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.title('Losses vs Iterations')
plt.savefig("fig2.png", bbox_inches='tight')

#Compute the multi-class Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# We need to determine the predicted values from the learnt data
# Forward propagation through hidden layer with Relu units
A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
# Forward propagation through output layer with Softmax
A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
#Compute predicted values from weights and biases
yhat=np.argmax(A2, axis=1)

a=confusion_matrix(y.T,yhat.T)
print("Multi-class Confusion Matrix")
print(a)
## iteration 0: loss 1.098507
## iteration 1000: loss 0.214611
## iteration 2000: loss 0.043622
## iteration 3000: loss 0.032525
## iteration 4000: loss 0.025108
## iteration 5000: loss 0.021365
## iteration 6000: loss 0.019046
## iteration 7000: loss 0.017475
## iteration 8000: loss 0.016359
## iteration 9000: loss 0.015703
## Multi-class Confusion Matrix
## [[ 99   1   0]
##  [  0 100   0]
##  [  0   1  99]]

Check out my compact and minimal book “Practical Machine Learning with R and Python: Second edition – Machine Learning in stereo” available on Amazon in paperback ($10.99) and kindle ($7.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems, and how to select the best model. Pick up your copy today!

2.2 Multi-class classification with Softmax – R code

The spiral data set created with Python was saved and is used as the input to the R code. The R Neural Network seems to perform much, much slower than both Python and Octave. Not sure why! Incidentally, the computation of the loss and the softmax derivative are identical for both R and Octave, yet R is much slower. To compute the softmax derivative I create matrices for the one-hot encoded yi and then stack them before subtracting pi-yi. I am sure there is a more elegant and more efficient way to do this, much like Python. Any suggestions?
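One compact alternative is sketched below. It assumes, as in the listing that follows, that A2 holds the Softmax probabilities with one row per training example and that y holds the class labels 0, 1, 2; it is only an illustration and is not the code in DLfunctions41.R.

# Sketch of a compact way to one-hot encode y and compute p - y in R
# (illustrative only; not the code in DLfunctions41.R)
numTraining <- length(y)
Yonehot <- diag(numOutput)[y + 1, ]    # numTraining x numOutput one-hot matrix
dZ2 <- A2 - Yonehot                    # p - y for every training example
# An equivalent 'Python-style' version using matrix indexing:
# dZ2 <- A2; dZ2[cbind(1:numTraining, y + 1)] <- dZ2[cbind(1:numTraining, y + 1)] - 1
# Scale by 1/numTraining if the loss is averaged over the training examples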

library(ggplot2)
library(dplyr)
library(RColorBrewer)
source("DLfunctions41.R")
# Read the spiral dataset
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 
Z1=data.frame(Z)
#Plot the dataset
ggplot(Z1,aes(x=V1,y=V2,col=V3)) +geom_point() + 
  scale_colour_gradientn(colours = brewer.pal(10, "Spectral"))

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X1 <- t(X)
Y1 <- t(y)

# Initialize number of features, number of hidden units in hidden layer and
# number of classes
numFeats<-2 # No features
numHidden<-100 # No of hidden units
numOutput<-3 # No of classes

# Initialize model
parameters <-initializeModel(numFeats, numHidden,numOutput)

W1 <-parameters[['W1']]
b1 <-parameters[['b1']]
W2 <-parameters[['W2']]
b2 <-parameters[['b2']]

# Set the learning rate
learningRate <- 0.5
# Initialize losses
losses <- NULL
# Perform gradient descent
for(i in 0:9000){

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]
cache1 <- retvals[['cache']]
forward_cache1 <- cache1[['forward_cache1']]
activation_cache <- cache1[['activation_cache']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
cache2 <- retvals[['cache']]
forward_cache2 <- cache2[['forward_cache1']]
activation_cache2 <- cache2[['activation_cache']]

# No of training examples
numTraining <- dim(X)[1]
dA <-0

# Select the elements where the y values are 0, 1 or 2 and make a vector
a=c(A2[y==0,1],A2[y==1,2],A2[y==2,3])
# Take log
correct_probs = -log(a)
# Compute loss
loss= sum(correct_probs)/numTraining

if(i %% 1000 == 0){
sprintf("iteration %d: loss %f",i, loss)
print(loss)
# Store the loss so that it can be plotted against iterations later
losses <- c(losses, loss)
}
# Backward propagation through output layer with Softmax units
retvals = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
dA1 = retvals[['dA_prev']]
dW2= retvals[['dW']]
db2= retvals[['db']]
# Backward propagation through hidden layer with Relu units
retvals = layerActivationBackward(t(dA1), cache1, y, activationFunc='relu')
dA0 = retvals[['dA_prev']]
dW1= retvals[['dW']]
db1= retvals[['db']]

# Update parameters
W1 <- W1 - learningRate * dW1
b1 <- b1 - learningRate * db1
W2 <- W2 - learningRate * t(dW2)
b2 <- b2 - learningRate * t(db2)
}
## [1] 1.212487
## [1] 0.5740867
## [1] 0.4048824
## [1] 0.3561941
## [1] 0.2509576
## [1] 0.7351063
## [1] 0.2066114
## [1] 0.2065875
## [1] 0.2151943
## [1] 0.1318807

 

#Create the iterations at which the losses were recorded
iterations <- seq(0,9000,1000)
df=data.frame(iterations,losses)
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(color="blue") +
    ggtitle("Losses vs iterations") + xlab("Iterations") + ylab("Loss")

plotDecisionBoundary(Z,W1,b1,W2,b2)



Multi-class Confusion Matrix

library(caret)
library(e1071)

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
yhat <- apply(A2, 1,which.max) -1
# Compute the multi-class Confusion Matrix (labels converted to factors for caret)
confusionMatrix(as.factor(yhat), as.factor(y))
Confusion Matrix and Statistics
          Reference
Prediction  0  1  2
         0 97  0  1
         1  2 96  4
         2  1  4 95

Overall Statistics                                        
               Accuracy : 0.96            
                 95% CI : (0.9312, 0.9792)
    No Information Rate : 0.3333          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.94            
 Mcnemar's Test P-Value : 0.5724          
Statistics by Class:

                     Class: 0 Class: 1 Class: 2
Sensitivity            0.9700   0.9600   0.9500
Specificity            0.9950   0.9700   0.9750
Pos Pred Value         0.9898   0.9412   0.9500
Neg Pred Value         0.9851   0.9798   0.9750
Prevalence             0.3333   0.3333   0.3333
Detection Rate         0.3233   0.3200   0.3167
Detection Prevalence   0.3267   0.3400   0.3333
Balanced Accuracy      0.9825   0.9650   0.9625

My book “Practical Machine Learning with R and Python” includes the implementation for many Machine Learning algorithms and associated metrics. Pick up your copy today!

2.3 Multi-class classification with Softmax – Octave code

A 2-layer Neural Network with the Softmax activation unit in the output layer is constructed in Octave. The same spiral data set is used for Octave as well.

source("DL41functions.m")
# Read the spiral data
data=csvread("spiral.csv");
# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the number of features, number of hidden units in hidden layer and number of classes
numFeats=2; #No features
numHidden=100; # No of hidden units
numOutput=3; # No of classes
# Initialize model
[W1 b1 W2 b2] = initializeModel(numFeats,numHidden,numOutput);
# Initialize losses
losses=[]
#Initialize learningRate
learningRate=0.5;
for k =1:10000
# Forward propagation through hidden layer with Relu units
[A1,cache1 activation_cache1]= layerActivationForward(X',W1,b1,activationFunc ='relu');
# Forward propagation through output layer with Softmax units
[A2,cache2 activation_cache2] = layerActivationForward(A1,W2,b2,activationFunc='softmax');
# No of training examples
numTraining = size(X)(1);
# Select rows where Y=0,1,and 2 and concatenate to a long vector
a=[A2(Y==0,1) ;A2(Y==1,2) ;A2(Y==2,3)];
#Select the correct column for log prob
correct_probs = -log(a);
#Compute log loss
loss= sum(correct_probs)/numTraining;
if(mod(k,1000) == 0)
disp(loss);
losses=[losses loss];
endif
dA=0;
# Backward propagation through output layer with Softmax units
[dA1 dW2 db2] = layerActivationBackward(dA, cache2, activation_cache2,Y,activationFunc='softmax');
# Backward propagation through hidden layer with Relu units
[dA0,dW1,db1] = layerActivationBackward(dA1', cache1, activation_cache1, Y, activationFunc='relu');
#Update parameters
W1 += -learningRate * dW1;
b1 += -learningRate * db1;
W2 += -learningRate * dW2';
b2 += -learningRate * db2';
endfor
# Plot Losses vs Iterations
iterations=0:1000:9000
plotCostVsIterations(iterations,losses)
# Plot the decision boundary
plotDecisionBoundary( X,Y,W1,b1,W2,b2)

The code for the Python, R and Octave implementations can be downloaded from Github at Deep Learning – Part 4

Conclusion

In this post I have implemented a 2-layer Neural Network with the Softmax classifier. In Part 3, I implemented a multi-layer Deep Learning Network. I intend to include the Softmax activation unit in the generalized multi-layer Deep Network along with the other activation units of sigmoid, tanh and relu.

Stick around, I’ll be back!!
Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. CS231 Convolutional Neural Networks for Visual Recognition
4. Eli Bendersky’s Website – The Softmax function and its derivative
5. Cross Validated – Backpropagation with Softmax / Cross Entropy
6. Stackoverflow – CS231n: How to calculate gradient for Softmax loss function?
7. Math Stack Exchange – Derivative of Softmax
8. The Matrix Calculus for Deep Learning

You may like
1.My book ‘Practical Machine Learning with R and Python’ on Amazon
2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
3. Deblurring with OpenCV: Weiner filter reloaded
4. A method to crowd source pothole marking on (Indian) roads
5. Rock N’ Roll with Bluemix, Cloudant & NodeExpress
6. Sea shells on the seashore
7. Design Principles of Scalable, Distributed Systems

To see all posts click Index of Posts