My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 6,7,8


This is the final set of presentations in my series ‘Elements of Neural Networks and Deep Learning’. This set follows the earlier 2 sets of presentations namely
1. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Part1,2,3
2. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 4,5

In this final set of presentations I discuss initialization methods and regularization techniques, including dropout. Next, I discuss gradient descent optimization methods like momentum, RMSProp, Adam etc. Lastly, I briefly touch on hyper-parameter tuning approaches. The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’.

1. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed.


The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

2. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces the exponentially weighted moving average and shows how this is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam.


The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

3. Elements of Neural Networks and Deep Learning – Part 8
This last part touches upon hyper-parameter tuning in Deep Learning networks.


This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

See also
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Big Data-1: Move into the big league:Graduate from Python to Pyspark
3. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
4. Revisiting crimes against women in India
5. Introducing cricket package yorkr: Part 1- Beaten by sheer pace!
6. Deblurring with OpenCV: Weiner filter reloaded
7. Taking a closer look at Quantum gates and their operations

To see all posts click Index of posts

Deep Learning from first principles in Python, R and Octave – Part 4


In this 4th post of my series on Deep Learning from first principles in Python, R and Octave – Part 4, I explore the details of creating a multi-class classifier using the Softmax activation unit in a neural network. The earlier posts in this series were

1. Deep Learning from first principles in Python, R and Octave – Part 1. In this post I implemented logistic regression as a simple Neural Network in vectorized Python, R and Octave
2. Deep Learning from first principles in Python, R and Octave – Part 2. This 2nd part implemented the most elementary neural network with 1 hidden layer and any number of activation units in the hidden layer with sigmoid activation at the output layer
3. Deep Learning from first principles in Python, R and Octave – Part 3. The 3rd part implemented a multi-layer Deep Learning network with an arbitrary number of hidden layers and activation units per hidden layer. The output layer was for binary classification and was based on the sigmoid unit. This multi-layer deep network was implemented in vectorized Python, R and Octave.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

This 4th part takes a swing at multi-class classification and uses the Softmax as the activation unit in the output layer. Inclusion of the Softmax activation unit in the output layer requires us to compute the derivative of Softmax, or rather the “Jacobian” of the Softmax function, besides also computing the log loss for this Softmax activation during back propagation. Since the derivation of the Jacobian of a Softmax and the computation of the Cross Entropy/log loss is very involved, I have implemented a basic neural network with just 1 hidden layer with the Softmax activation at the output layer. I also perform multi-class classification based on the ‘spiral’ data set from the CS231n Convolutional Neural Networks Stanford course, to test the performance and correctness of the implementations in Python, R and Octave. You can clone/download the code for the Python, R and Octave implementations from Github at Deep Learning – Part 4

Note: A detailed discussion of the derivation below can also be seen in my video presentation Neural Networks 5

The Softmax function takes an N dimensional vector as input and generates an N dimensional vector as output.
The Softmax function is given by
S_{j}= \frac{e^{z_{j}}}{\sum_{k=1}^{N}e^{z_{k}}}
The Softmax has a probabilistic interpretation: since each output is an exponential divided by the sum of all the exponentials, the Softmax values of a vector always add up to 1.

As mentioned earlier, the Softmax takes a vector input and returns a vector of outputs. For example, the Softmax of the vector a=[1, 3, 6] is another vector S=[0.0063, 0.0471, 0.9464]. Notice that the ordering of the outputs follows the ordering of the inputs, though the mapping is not linear. Also, the matrix of derivatives of one vector with respect to another vector is known as the Jacobian. By the way, The Matrix Calculus You Need For Deep Learning by Terence Parr and Jeremy Howard is a very good paper that distills all the main mathematical concepts for Deep Learning in one place.
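To convince ourselves of these numbers, here is a small standalone NumPy snippet (my own illustration, not the DLfunctions code used later in this post) that computes the Softmax of a=[1, 3, 6].

import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; this does not change the result
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

a = np.array([1.0, 3.0, 6.0])
print(np.round(softmax(a), 4))   # ~[0.0064 0.0471 0.9465], matching the values quoted above up to rounding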

Let us take a simple 2-layer neural network with just 2 activation units in the hidden layer, as shown below

Z_{1}^{1} =W_{11}^{1}x_{1} + W_{21}^{1}x_{2} + b_{1}^{1}
Z_{2}^{1} =W_{12}^{1}x_{1} + W_{22}^{1}x_{2} + b_{2}^{1}
and
A_{1}^{1} = g(Z_{1}^{1})
A_{2}^{1} = g(Z_{2}^{1})
where g() is the activation function in the hidden layer, which can be a relu, sigmoid or a
tanh function

Note: The superscript denotes the layer. The above denotes the equation for layer 1
of the neural network. For layer 2 with the Softmax activation, the equations are
Z_{1}^{2} =W_{11}^{2}x_{1} + W_{21}^{2}x_{2} + b_{1}^{2}
Z_{2}^{2} =W_{12}^{2}x_{1} + W_{22}^{2}x_{2} + b_{2}^{2}
and
A_{1}^{2} = S(Z_{1}^{2})
A_{2}^{2} = S(Z_{2}^{2})
where S() is the Softmax activation function
S=\begin{pmatrix} S(Z_{1}^{2})\\ S(Z_{2}^{2}) \end{pmatrix}
S=\begin{pmatrix} \frac{e^{Z1}}{e^{Z1}+e^{Z2}}\\ \frac{e^{Z2}}{e^{Z1}+e^{Z2}} \end{pmatrix}

The Jacobian of the softmax ‘S’ is given by
\begin{pmatrix} \frac {\partial S_{1}}{\partial Z_{1}} & \frac {\partial S_{1}}{\partial Z_{2}}\\ \frac {\partial S_{2}}{\partial Z_{1}} & \frac {\partial S_{2}}{\partial Z_{2}} \end{pmatrix}
\begin{pmatrix} \frac{\partial}{\partial Z_{1}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z1}}{e^{Z1}+ e^{Z2}}\\ \frac{\partial}{\partial Z_{1}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}} & \frac{\partial}{\partial Z_{2}} \frac {e^{Z2}}{e^{Z1}+ e^{Z2}} \end{pmatrix}     – (A)

Now the quotient rule (‘division rule’) of derivatives is as follows. If u and v are functions of x, then
\frac{d}{dx} \frac {u}{v} =\frac {vdu -udv}{v^{2}}
Using this to compute each element of the above Jacobian matrix, and writing \Sigma = e^{Z_{1}}+e^{Z_{2}} for the denominator, we see that
when i=j we have
\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{1}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {\Sigma e^{Z_{1}} - (e^{Z_{1}})^{2}}{\Sigma^{2}}
and when i \neq j
\frac {\partial}{\partial Z_{1}}\frac{e^{Z_{2}}}{e^{Z_{1}}+e^{Z_{2}}} = \frac {0 - e^{Z_{1}}e^{Z_{2}}}{\Sigma^{2}}
This is of the general form
\frac {\partial S_{j}}{\partial z_{i}} = S_{i}( 1-S_{j})  when i=j
and
\frac {\partial S_{j}}{\partial z_{i}} = -S_{i}S_{j}  when i \neq j
Note: Since the Softmax essentially gives the probability the following
notation is also used
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
If you throw the “Kronecker delta” into the equation, then the above equations can be expressed even more concisely as
\frac {\partial p_{j}}{\partial z_{i}} = p_{i} (\delta_{ij} - p_{j})
where \delta_{ij} = 1 when i=j and 0 when i \neq j

This reduces the Jacobian of the simple 2-output Softmax in equation (A) to
\begin{pmatrix} p_{1}(1-p_{1}) & -p_{1}p_{2} \\ -p_{2}p_{1} & p_{2}(1-p_{2}) \end{pmatrix}
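Incidentally, the compact form p_{i}(\delta_{ij} - p_{j}) is just diag(p) - pp^{T}, which makes the Jacobian a one-liner in NumPy. The sketch below is my own illustration and not part of the implementations discussed later in this post.

import numpy as np

def softmax_jacobian(p):
    # J[i, j] = p_i * (delta_ij - p_j) = diag(p) - outer(p, p)
    p = np.asarray(p)
    return np.diag(p) - np.outer(p, p)

p = np.array([0.3, 0.7])   # a 2-output Softmax
print(softmax_jacobian(p))
# [[ 0.21 -0.21]
#  [-0.21  0.21]]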
The cross entropy/log loss for the Softmax output is given by
L = -\sum y_{i} log(p_{i})
For the 2-valued Softmax output, the derivatives of the loss with respect to the probabilities are
\frac {dL}{dp1} = -\frac {y_{1}}{p_{1}}
\frac {dL}{dp2} = -\frac {y_{2}}{p_{2}}
Using the chain rule we can write
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \frac {\partial p_{i}}{\partial w_{pq}} (1)
and
\frac {\partial p_{i}}{\partial w_{pq}} = \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}} (2)
In expanded form this is
\frac {\partial L}{\partial w_{pq}} = \sum _{i}\frac {\partial L}{\partial p_{i}} \sum _{k}\frac {\partial p_{i}}{\partial z_{k}} \frac {\partial z_{k}}{\partial w_{pq}}
Also
\frac {\partial L}{\partial Z_{i}} =\sum _{j} \frac {\partial L}{\partial p_{j}} \frac {\partial p_{j}}{\partial Z_{i}}
Therefore
\frac {\partial L}{\partial Z_{1}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{1}}
\frac {\partial L}{\partial Z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1})
Since
\frac {\partial p_{j}}{\partial z_{i}} = p_{i}( 1-p_{j}) when i=j
and
\frac {\partial p_{j}}{\partial z_{i}} = -p_{i}p_{j} when i \neq j
which simplifies to
\frac {\partial L}{\partial Z_{1}} = -y_{1} + y_{1}p_{1} + y_{2}p_{1} = p_{1}(y_{1} + y_{2}) - y_{1}
Since
\sum_{i} y_{i} =1
this gives
\frac {\partial L}{\partial Z_{1}}= p_{1} - y_{1}
Similarly
\frac {\partial L}{\partial Z_{2}} =\frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial Z_{2}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial Z_{2}}
\frac {\partial L}{\partial Z_{2}}=-\frac {y_{1}}{p_{1}}(-p_{1}p_{2}) - \frac {y_{2}}{p_{2}}p_{2}(1-p_{2})
= y_{1}p_{2} + y_{2}p_{2} - y_{2}
\frac {\partial L}{\partial Z_{2}} =p_{2}(y_{1} + y_{2}) - y_{2} = p_{2} - y_{2}
In general this is of the form
\frac {\partial L}{\partial z_{i}} = p_{i} -y_{i}
For example, if the computed probabilities were p=[0.1, 0.7, 0.2], then the class with probability 0.7 is the most likely class. The ‘One hot encoding’ for y would then be y=[0, 1, 0], and therefore the gradient p_{i}-y_{i} = [0.1, -0.3, 0.2]
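This can be checked with a couple of lines of NumPy (illustrative only):

import numpy as np

p = np.array([0.1, 0.7, 0.2])      # predicted probabilities
y = np.array([0, 1, 0])            # one-hot encoding of the true class
print(p - y)                        # [ 0.1 -0.3  0.2] -- the gradient dL/dz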

Note: Further, we could extend this derivation to a Softmax activation layer that outputs 3 classes
S=\begin{pmatrix} \frac{e^{z1}}{e^{z1}+e^{z2}+e^{z3}}\\ \frac{e^{z2}}{e^{z1}+e^{z2}+e^{z3}} \\ \frac{e^{z3}}{e^{z1}+e^{z2}+e^{z3}} \end{pmatrix}

We could derive
\frac {\partial L}{\partial z1}= \frac {\partial L}{\partial p_{1}} \frac {\partial p_{1}}{\partial z_{1}} +\frac {\partial L}{\partial p_{2}} \frac {\partial p_{2}}{\partial z_{1}} +\frac {\partial L}{\partial p_{3}} \frac {\partial p_{3}}{\partial z_{1}} which similarly reduces to
\frac {\partial L}{\partial z_{1}}=-\frac {y_{1}}{p_{1}} p_{1}(1-p_{1}) - \frac {y_{2}}{p_{2}}(-p_{2}p_{1}) - \frac {y_{3}}{p_{3}}(-p_{3}p_{1})
= -y_{1}+ y_{1}p_{1} + y_{2}p_{1} + y_{3}p_{1} = p_{1}(y_{1} + y_{2} + y_{3}) - y_{1} = p_{1} - y_{1}
Interestingly, despite the lengthy derivations the final result is simple and intuitive!
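As a sanity check of the result \partial L/\partial z_{i} = p_{i} - y_{i} for 3 classes, the hedged snippet below (my own helper names, not the post's code) compares the analytic gradient with a finite-difference approximation.

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

def cross_entropy(z, y):
    # L = -sum_i y_i * log(p_i) with p = softmax(z)
    return -np.sum(y * np.log(softmax(z)))

z = np.array([1.0, 2.0, 0.5])
y = np.array([0.0, 1.0, 0.0])          # one-hot target
analytic = softmax(z) - y               # p - y

# Central finite differences for each z_i
eps = 1e-6
numeric = np.zeros_like(z)
for i in range(len(z)):
    zp, zm = z.copy(), z.copy()
    zp[i] += eps
    zm[i] -= eps
    numeric[i] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps)

print(np.allclose(analytic, numeric, atol=1e-6))   # True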

As seen in my post ‘Deep Learning from first principles with Python, R and Octave – Part 3’, the key equations for forward and backward propagation are

Forward propagation equations layer 1
Z_{1} = W_{1}X +b_{1}     and  A_{1} = g(Z_{1})
Forward propagation equations layer 2
Z_{2} = W_{2}A_{1} +b_{2}  and  A_{2} = S(Z_{2})
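In vectorized NumPy form, these two forward steps look roughly as below. This is a sketch under assumed shapes (columns are training examples), and the helper names are mine, not the layerActivationForward function used in the implementations that follow.

import numpy as np

def relu(z):
    return np.maximum(0, z)

def softmax_cols(z):
    # Column-wise Softmax: each column of z is one training example
    e = np.exp(z - np.max(z, axis=0, keepdims=True))
    return e / np.sum(e, axis=0, keepdims=True)

def forward(X, W1, b1, W2, b2):
    Z1 = W1 @ X + b1          # layer 1: Z1 = W1.X + b1
    A1 = relu(Z1)             # A1 = g(Z1)
    Z2 = W2 @ A1 + b2         # layer 2: Z2 = W2.A1 + b2
    A2 = softmax_cols(Z2)     # A2 = S(Z2)
    return Z1, A1, Z2, A2

# Example shapes: 2 features, 4 hidden units, 3 classes, 5 examples
rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))
W1, b1 = rng.normal(size=(4, 2)), np.zeros((4, 1))
W2, b2 = rng.normal(size=(3, 4)), np.zeros((3, 1))
Z1, A1, Z2, A2 = forward(X, W1, b1, W2, b2)
print(A2.sum(axis=0))   # each column of the Softmax output sums to 1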

Using the result (A) in the back propagation equations below we have
Backward propagation equations layer 2
\partial L/\partial W_{2} =\partial L/\partial Z_{2}*A_{1}=(p-y)*A_{1}
\partial L/\partial b_{2} =\partial L/\partial Z_{2}=(p-y)
\partial L/\partial A_{1} = \partial L/\partial Z_{2} * W_{2}=(p-y)*W_{2}
Backward propagation equations layer 1
\partial L/\partial Z_{1} =\partial L/\partial A_{1} *g'(Z_{1})
\partial L/\partial W_{1} =\partial L/\partial Z_{1} *A_{0}
\partial L/\partial b_{1} =\partial L/\partial Z_{1}
where (p-y) denotes the vector with components p_{i}-y_{i} derived above
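Continuing the sketch above, these backward propagation equations translate roughly to the following (gradients averaged over the m training examples; again my own illustration, not the repository's layerActivationBackward).

import numpy as np

def backward(X, Y_onehot, Z1, A1, A2, W2):
    # Shapes: X (n_features, m), Y_onehot (K, m), Z1/A1 (h, m), A2 (K, m), W2 (K, h)
    m = X.shape[1]
    dZ2 = A2 - Y_onehot                          # dL/dZ2 = p - y
    dW2 = dZ2 @ A1.T / m                         # dL/dW2 = dL/dZ2 . A1
    db2 = dZ2.sum(axis=1, keepdims=True) / m     # dL/db2 = dL/dZ2
    dA1 = W2.T @ dZ2                             # dL/dA1 = W2 . dL/dZ2
    dZ1 = dA1 * (Z1 > 0)                         # dL/dZ1 = dL/dA1 * g'(Z1), Relu derivative
    dW1 = dZ1 @ X.T / m                          # dL/dW1 = dL/dZ1 . A0  (A0 = X)
    db1 = dZ1.sum(axis=1, keepdims=True) / m     # dL/db1 = dL/dZ1
    return dW1, db1, dW2, db2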

2.0 Spiral data set

As I mentioned earlier, I will be using the ‘spiral’ data from CS231n Convolutional Neural Networks to ensure that my vectorized implementations in Python, R and Octave are correct. Here is the ‘spiral’ data set.

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Create an input data set - Taken from CS231n Convolutional Neural networks
# http://cs231n.github.io/neural-networks-case-study/
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
# Plot the data
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.savefig("fig1.png", bbox_inches='tight')


The vectorized Python, R and Octave implementations, along with the plots they produce, are shown below

2.1 Multi-class classification with Softmax – Python code

A simple 2-layer Neural Network with a single hidden layer, with 100 Relu activation units in the hidden layer and the Softmax activation unit in the output layer, is used for multi-class classification. This Deep Learning network plots the non-linear boundary of the 3 classes as shown below

import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir("C:/junk/dl-4/dl-4")
exec(open("././DLfunctions41.py").read())

# Read the input data
N = 100 # number of points per class
D = 2 # dimensionality
K = 3 # number of classes
X = np.zeros((N*K,D)) # data matrix (each row = single example)
y = np.zeros(N*K, dtype='uint8') # class labels
for j in range(K):
  ix = range(N*j,N*(j+1))
  r = np.linspace(0.0,1,N) # radius
  t = np.linspace(j*4,(j+1)*4,N) + np.random.randn(N)*0.2 # theta
  X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
  y[ix] = j
  
# Set the number of features, hidden units in hidden layer and number of classes
numHidden=100 # No of hidden units in hidden layer
numFeats= 2 # dimensionality
numOutput = 3 # number of classes

# Initialize the model
parameters=initializeModel(numFeats,numHidden,numOutput)
W1= parameters['W1']
b1= parameters['b1']
W2= parameters['W2']
b2= parameters['b2']

# Set the learning rate
learningRate=0.6 

# Initialize losses
losses=[]
# Perform Gradient descent
for i in range(10000):
    # Forward propagation through hidden layer with Relu units
    A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
    # Forward propagation through output layer with Softmax
    A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
    
    # No of training examples
    numTraining = X.shape[0]
    # Compute log probs. Take the log prob of correct class based on output y
    correct_logprobs = -np.log(A2[range(numTraining),y])
    # Compute loss
    loss = np.sum(correct_logprobs)/numTraining
    
    # Print the loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))
        losses.append(loss)

    dA=0

    # Backward  propagation through output layer with Softmax
    dA1,dW2,db2 = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
    # Backward  propagation through hidden layer with Relu unit
    dA0,dW1,db1 = layerActivationBackward(dA1.T, cache1, y, activationFunc='relu')
    
    #Update parameters with the learning rate
    W1 += -learningRate * dW1
    b1 += -learningRate * db1
    W2 += -learningRate * dW2.T
    b2 += -learningRate * db2.T

#Plot losses vs iterations  
i=np.arange(0,10000,1000)
plt.plot(i,losses)

plt.xlabel('Iterations')
plt.ylabel('Loss')
plt.title('Losses vs Iterations')
plt.savefig("fig2.png", bbox="tight")

#Compute the multi-class Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# We need to determine the predicted values from the learnt data
# Forward propagation through hidden layer with Relu units
A1,cache1= layerActivationForward(X.T,W1,b1,'relu')
    
# Forward propagation through output layer with Softmax
A2,cache2 = layerActivationForward(A1,W2,b2,'softmax')
#Compute predicted values from weights and biases
yhat=np.argmax(A2, axis=1)

a=confusion_matrix(y.T,yhat.T)
print("Multi-class Confusion Matrix")
print(a)
## iteration 0: loss 1.098507
## iteration 1000: loss 0.214611
## iteration 2000: loss 0.043622
## iteration 3000: loss 0.032525
## iteration 4000: loss 0.025108
## iteration 5000: loss 0.021365
## iteration 6000: loss 0.019046
## iteration 7000: loss 0.017475
## iteration 8000: loss 0.016359
## iteration 9000: loss 0.015703
## Multi-class Confusion Matrix
## [[ 99   1   0]
##  [  0 100   0]
##  [  0   1  99]]

Check out my compact and minimal book  “Practical Machine Learning with R and Python:Second edition- Machine Learning in stereo”  available in Amazon in paperback($10.99) and kindle($7.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems and how to select the best model. Pick your copy today!!

2.2 Multi-class classification with Softmax – R code

The spiral data set created with Python was saved and is used as the input for the R code. The R Neural Network seems to perform much, much slower than both Python and Octave. I am not sure why! Incidentally, the computation of the loss and the softmax derivative is identical for both R and Octave, yet R is much slower. To compute the softmax derivative I create matrices for the one-hot encoded y and then stack them before subtracting p_{i}-y_{i}. I am sure there is a more elegant and efficient way to do this, much like Python. Any suggestions?

library(ggplot2)
library(dplyr)
library(RColorBrewer)
source("DLfunctions41.R")
# Read the spiral dataset
Z <- as.matrix(read.csv("spiral.csv",header=FALSE)) 
Z1=data.frame(Z)
#Plot the dataset
ggplot(Z1,aes(x=V1,y=V2,col=V3)) +geom_point() + 
  scale_colour_gradientn(colours = brewer.pal(10, "Spectral"))

# Setup the data
X <- Z[,1:2]
y <- Z[,3]
X1 <- t(X)
Y1 <- t(y)

# Initialize number of features, number of hidden units in hidden layer and
# number of classes
numFeats<-2 # No features
numHidden<-100 # No of hidden units
numOutput<-3 # No of classes

# Initialize model
parameters <-initializeModel(numFeats, numHidden,numOutput)

W1 <-parameters[['W1']]
b1 <-parameters[['b1']]
W2 <-parameters[['W2']]
b2 <-parameters[['b2']]

# Set the learning rate
learningRate <- 0.5
# Initialize losses
losses <- NULL
# Perform gradient descent
for(i in 0:9000){

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]
cache1 <- retvals[['cache']]
forward_cache1 <- cache1[['forward_cache1']]
activation_cache <- cache1[['activation_cache']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
cache2 <- retvals[['cache']]
forward_cache2 <- cache2[['forward_cache1']]
activation_cache2 <- cache2[['activation_cache']]

# No of training examples
numTraining <- dim(X)[1]
dA <-0

# Select the elements where the y values are 0, 1 or 2 and make a vector
a=c(A2[y==0,1],A2[y==1,2],A2[y==2,3])
# Take log
correct_probs = -log(a)
# Compute loss
loss= sum(correct_probs)/numTraining

if(i %% 1000 == 0){
sprintf("iteration %d: loss %f",i, loss)
print(loss)
# Collect the loss for the plot below
losses <- c(losses,loss)
}
# Backward propagation through output layer with Softmax units
retvals = layerActivationBackward(dA, cache2, y, activationFunc='softmax')
dA1 = retvals[['dA_prev']]
dW2= retvals[['dW']]
db2= retvals[['db']]
# Backward propagation through hidden layer with Relu units
retvals = layerActivationBackward(t(dA1), cache1, y, activationFunc='relu')
dA0 = retvals[['dA_prev']]
dW1= retvals[['dW']]
db1= retvals[['db']]

# Update parameters
W1 <- W1 - learningRate * dW1
b1 <- b1 - learningRate * db1
W2 <- W2 - learningRate * t(dW2)
b2 <- b2 - learningRate * t(db2)
}
## [1] 1.212487
## [1] 0.5740867
## [1] 0.4048824
## [1] 0.3561941
## [1] 0.2509576
## [1] 0.7351063
## [1] 0.2066114
## [1] 0.2065875
## [1] 0.2151943
## [1] 0.1318807

 

#Create iterations
iterations <- seq(0,9000,by=1000)
df <- data.frame(iterations,losses)
ggplot(df,aes(x=iterations,y=losses)) + geom_point() + geom_line(color="blue") +
    ggtitle("Losses vs iterations") + xlab("Iterations") + ylab("Loss")

plotDecisionBoundary(Z,W1,b1,W2,b2)



Multi-class Confusion Matrix

library(caret)
library(e1071)

# Forward propagation through hidden layer with Relu units
retvals <- layerActivationForward(X1,W1,b1,'relu')
A1 <- retvals[['A']]

# Forward propagation through output layer with Softmax units
retvals = layerActivationForward(A1,W2,b2,'softmax')
A2 <- retvals[['A']]
yhat <- apply(A2, 1,which.max) -1
confusionMatrix(factor(yhat),factor(y))
Confusion Matrix and Statistics
          Reference
Prediction  0  1  2
         0 97  0  1
         1  2 96  4
         2  1  4 95

Overall Statistics                                        
               Accuracy : 0.96            
                 95% CI : (0.9312, 0.9792)
    No Information Rate : 0.3333          
    P-Value [Acc > NIR] : <2e-16          
                                          
                  Kappa : 0.94            
 Mcnemar's Test P-Value : 0.5724          
Statistics by Class:

                     Class: 0 Class: 1 Class: 2
Sensitivity            0.9700   0.9600   0.9500
Specificity            0.9950   0.9700   0.9750
Pos Pred Value         0.9898   0.9412   0.9500
Neg Pred Value         0.9851   0.9798   0.9750
Prevalence             0.3333   0.3333   0.3333
Detection Rate         0.3233   0.3200   0.3167
Detection Prevalence   0.3267   0.3400   0.3333
Balanced Accuracy      0.9825   0.9650   0.9625

My book “Practical Machine Learning with R and Python” includes the implementation for many Machine Learning algorithms and associated metrics. Pick up your copy today!

2.3 Multi-class classification with Softmax – Octave code

A 2 layer Neural network with the Softmax activation unit in the output layer is constructed in Octave. The same spiral data set is used for Octave also

source("DL41functions.m")
# Read the spiral data
data=csvread("spiral.csv");
# Setup the data
X=data(:,1:2);
Y=data(:,3);
# Set the number of features, number of hidden units in hidden layer and number of classes
numFeats=2; #No features
numHidden=100; # No of hidden units
numOutput=3; # No of classes
# Initialize model
[W1 b1 W2 b2] = initializeModel(numFeats,numHidden,numOutput);
# Initialize losses
losses=[]
#Initialize learningRate
learningRate=0.5;
for k =1:10000
# Forward propagation through hidden layer with Relu units
[A1,cache1 activation_cache1]= layerActivationForward(X',W1,b1,activationFunc ='relu');
# Forward propagation through output layer with Softmax units
[A2,cache2 activation_cache2] = layerActivationForward(A1,W2,b2,activationFunc='softmax');
# No of training examples
numTraining = size(X)(1);
# Select rows where Y=0,1,and 2 and concatenate to a long vector
a=[A2(Y==0,1) ;A2(Y==1,2) ;A2(Y==2,3)];
#Select the correct column for log prob
correct_probs = -log(a);
#Compute log loss
loss= sum(correct_probs)/numTraining;
if(mod(k,1000) == 0)
disp(loss);
losses=[losses loss];
endif
dA=0;
# Backward propagation through output layer with Softmax units
[dA1 dW2 db2] = layerActivationBackward(dA, cache2, activation_cache2,Y,activationFunc='softmax');
# Backward propagation through hidden layer with Relu units
[dA0,dW1,db1] = layerActivationBackward(dA1', cache1, activation_cache1, Y, activationFunc='relu');
#Update parameters
W1 += -learningRate * dW1;
b1 += -learningRate * db1;
W2 += -learningRate * dW2';
b2 += -learningRate * db2';
endfor
# Plot Losses vs Iterations
iterations=0:1000:9000
plotCostVsIterations(iterations,losses)
# Plot the decision boundary
plotDecisionBoundary( X,Y,W1,b1,W2,b2)

The code for the Python, R and Octave implementations can be downloaded from Github at Deep Learning – Part 4

Conclusion

In this post I have implemented a 2-layer Neural Network with the Softmax classifier. In Part 3, I implemented a multi-layer Deep Learning Network. I intend to include the Softmax activation unit in the generalized multi-layer Deep Network along with the other activation units of sigmoid, tanh and relu.

Stick around, I’ll be back!!
Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. CS231n Convolutional Neural Networks for Visual Recognition
4. Eli Bendersky’s Website – The Softmax function and its derivative
5. Cross Validated – Backpropagation with Softmax / Cross Entropy
6. Stackoverflow – CS231n: How to calculate gradient for Softmax loss function?
7. Math Stack Exchange – Derivative of Softmax
8. The Matrix Calculus for Deep Learning

You may like
1.My book ‘Practical Machine Learning with R and Python’ on Amazon
2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
3. Deblurring with OpenCV: Weiner filter reloaded
4. A method to crowd source pothole marking on (Indian) roads
5. Rock N’ Roll with Bluemix, Cloudant & NodeExpress
6. Sea shells on the seashore
7. Design Principles of Scalable, Distributed Systems

To see all posts click Index of posts

Deep Learning from first principles in Python, R and Octave – Part 3


“Once upon a time, I, Chuang Tzu, dreamt I was a butterfly, fluttering hither and thither, to all intents and purposes a butterfly. I was conscious only of following my fancies as a butterfly, and was unconscious of my individuality as a man. Suddenly, I awoke, and there I lay, myself again. Now I do not know whether I was then a man dreaming I was a butterfly, or whether I am now a butterfly dreaming that I am a man.”
from The Brain: The Story of you – David Eagleman

“Thought is a great big vector of neural activity”
Prof Geoffrey Hinton

Introduction

This is the third part in my series on Deep Learning from first principles in Python, R and Octave. In the first part Deep Learning from first principles in Python, R and Octave-Part 1, I implemented logistic regression as a 2 layer neural network. The 2nd part Deep Learning from first principles in Python, R and Octave-Part 2, dealt with the implementation of 3 layer Neural Networks with 1 hidden layer to perform classification tasks, where the 2 classes cannot be separated by a linear boundary. In this third part, I implement a multi-layer, Deep Learning (DL) network of arbitrary depth (any number of hidden layers) and arbitrary height (any number of activation units in each hidden layer). The implementations of these Deep Learning networks, in all the 3 parts, are based on vectorized versions in Python, R and Octave. The implementation in the 3rd part is for an L-layer Deep Network, but without any regularization, early stopping, momentum or learning rate adaptation techniques. However, even the barebones multi-layer DL network is a handful and has enough hyperparameters to fine-tune and adjust.

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version($9.99/Rs449).

The implementations of the vectorized L-layer Deep Learning network in Python, R and Octave were both exhausting and exacting!! Keeping track of the indices, layer numbers and matrix dimensions required quite a bit of focus. While the implementation was demanding, it was also very exciting to get the code to work. The trick was to be able to shift gears between the slight quirkiness of the languages. Here are some of the challenges I faced.

1. Python and Octave allow multiple return values to be unpacked in a single statement. With R, unpacking multiple return values from a list, requires the list returned, to be unpacked separately. I did see that there is a package gsubfn, which does this.  I hope this feature becomes a base R feature.
2. Python and R allow dissimilar elements to be saved and returned from functions using dictionaries or lists respectively. However there is no real equivalent in Octave. The closest I got to this functionality in Octave, was the ‘cell array’. But the cell array can be accessed only by the index, and not with the key as in a Python dictionary or R list. This makes things just a bit more difficult in Octave.
3. Python and Octave include implicit broadcasting. In R, broadcasting is not implicit, but R has a nifty function, the sweep(), with which we can broadcast either by columns or by rows
4. The closest equivalent of Python’s dictionary, or R’s list, in Octave is the cell array. However I had to manage separate cell arrays for the weights and biases, and during gradient descent, separate cell arrays for the gradients dW and dB
5. In Python the rank-1 numpy arrays can be annoying at times. This issue is not present in R and Octave.

Though the number of lines of code for the Deep Learning functions in Python, R and Octave is about 350 apiece, this has been some of the most difficult code I have implemented. The current vectorized implementation supports the relu, sigmoid and tanh activation functions as of now. I will be adding other activation functions like the ‘leaky relu’, ‘softmax’ and others to the implementation in the weeks to come.

While testing with different hyper-parameters, namely i) the number of hidden layers, ii) the number of activation units in each layer, iii) the activation function and iv) the number of iterations, I found the L-layer Deep Learning Network to be very sensitive to these hyper-parameters. It is not easy to tune the parameters. Adding more hidden layers, or more units per layer, does not help and mostly results in gradient descent getting stuck in some local minima. It does take a fair amount of trial and error and very close observation of how the DL network performs for logical changes. We can then zero in on the optimal solution. Feel free to download/fork my code from Github DeepLearning-Part 3 and play around with the hyper-parameters for your own problems.

Derivation of a Multi Layer Deep Learning Network

Note: A detailed discussion of the derivation below is available in my video presentation Neural Network 4
Let’s take a simple Neural Network with 2 hidden layers and an output layer

In the forward propagation cycle the equations are

Z_{1} = W_{1}A_{0} +b_{1}  and  A_{1} = g(Z_{1})
Z_{2} = W_{2}A_{1} +b_{2}  and  A_{2} = g(Z_{2})
Z_{3} = W_{3}A_{2} +b_{3}  and A_{3} = g(Z_{3})

The loss function is given by
L = -(Y log A_{3} + (1-Y) log(1-A_{3}))
and \partial L/\partial A_{3} = -(Y/A_{3} - (1-Y)/(1-A_{3}))

For a binary classification the output activation function is the sigmoid function given by
A_{3} = 1/(1+ e^{-Z3}). It can be shown that
dA_{3}/dZ_{3} = A_{3}(1-A_3) see equation 2 in Part 1

\partial L/\partial Z_{3} = \partial L/\partial A_{3}* \partial A_{3}/\partial Z_{3} = A3-Y see equation (f) in  Part 1
and since
\partial L/\partial A_{2} = \partial L/\partial Z_{3} * \partial Z_{3}/\partial A_{2} = (A_{3} -Y) * W_{3} because \partial Z_{3}/\partial A_{2} = W_{3} -(1a)
and \partial L/\partial Z_{2} =\partial L/\partial A_{2} * \partial A_{2}/\partial Z_{2} = (A_{3} -Y) * W_{3} *g'(Z_{2}) -(1b)
\partial L/\partial W_{2} = \partial L/\partial Z_{2} * A_{1} -(1c)
since \partial Z_{2}/\partial W_{2} = A_{1}
and
\partial L/\partial b_{2} = \partial L/\partial Z_{2} -(1d)
because
\partial Z_{2}/\partial b_{2} =1

Also

\partial L/\partial A_{1} =\partial L/\partial Z_{2} * \partial Z_{2}/\partial A_{1} = \partial L/\partial Z_{2} * W_{2}     – (2a)
\partial L/\partial Z_{1} =\partial L/\partial A_{1} * \partial A_{1}/\partial Z_{1} = \partial L/\partial Z_{2} * W_{2} *g'(Z_{1})          – (2b)
\partial L/\partial W_{1} = \partial L/\partial Z_{1} * A_{0} – (2c)
\partial L/\partial b_{1} = \partial L/\partial Z_{1} – (2d)

Inspecting the above equations (1a – 1d & 2a-2d), our ‘Uber deep, bottomless’ brain  can easily discern the pattern in these equations. The equation for any layer ‘l’ is of the form
Z_{l} = W_{l}A_{l-1} +b_{l}     and  A_{l} = g(Z_{l})
The equation for the backward propagation have the general form
\partial L/\partial A_{l} = \partial L/\partial Z_{l+1} * W_{l+1}
\partial L/\partial Z_{l}=\partial L/\partial A_{l} *g'(Z_{l})
\partial L/\partial W_{l} =\partial L/\partial Z_{l} *A_{l-1}
\partial L/\partial b_{l} =\partial L/\partial Z_{l}
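These general equations translate almost line-for-line into a backward loop over the layers. The sketch below is illustrative only; it assumes the forward pass cached (A_{l-1}, Z_{l}) for every layer, that the hidden layers use relu and the output layer uses sigmoid, and its names are not those of the DLfunctions code in my repository.

import numpy as np

def backward_L_layer(AL, Y, caches, weights):
    # caches[l-1] = (A_prev, Z) for layer l = 1..L, where A_prev = A_{l-1} and Z = Z_l
    # weights[l-1] = W_l; hidden layers use Relu, output layer uses sigmoid
    m = Y.shape[1]
    L = len(caches)
    grads = {}
    dZ = AL - Y                                       # output layer: dL/dZ_L = A_L - Y
    for l in range(L, 0, -1):
        A_prev, Z = caches[l - 1]
        grads['dW' + str(l)] = dZ @ A_prev.T / m      # dL/dW_l = dL/dZ_l . A_{l-1}
        grads['db' + str(l)] = np.sum(dZ, axis=1, keepdims=True) / m
        if l > 1:
            dA_prev = weights[l - 1].T @ dZ           # dL/dA_{l-1} = W_l . dL/dZ_l
            Z_prev = caches[l - 2][1]
            dZ = dA_prev * (Z_prev > 0)               # dL/dZ_{l-1} = dL/dA_{l-1} * g'(Z_{l-1}), Relu
    return grads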

Some other important results are the derivatives of the activation functions in the implemented Deep Learning network:
g(z) = sigmoid(z) = 1/(1+e^{-z}) = a, with g'(z) = a(1-a) – see Part 1
g(z) = tanh(z) = a, with g'(z) = 1 - a^{2}
g(z) = relu(z) = z when z > 0 and 0 when z <= 0, with g'(z) = 1 when z > 0 and 0 when z <= 0
While it appears that there is a discontinuity in the derivative at 0, in practice this does not present a problem
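For reference, these derivatives in NumPy form (a sketch that mirrors, but is not identical to, the activation functions in my repository):

import numpy as np

def sigmoid_derivative(z):
    a = 1.0 / (1.0 + np.exp(-z))
    return a * (1 - a)                 # g'(z) = a(1 - a)

def tanh_derivative(z):
    a = np.tanh(z)
    return 1 - a ** 2                  # g'(z) = 1 - a^2

def relu_derivative(z):
    return (z > 0).astype(float)       # g'(z) = 1 for z > 0, 0 otherwise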

The implementation of the multi-layer vectorized Deep Learning Network for Python, R and Octave is included below. For all these implementations, I initially create the size and configuration of the Deep Learning network with the layer dimensions. So, for example, a layersDimensions vector ‘V’ of length L indicates ‘L’ layers, where

V (in Python)= [v_{0}, v_{1}, v_{2}, … v_{L-1}]
V (in R)= c(v_{1}, v_{2}, v_{3}, … v_{L})
V (in Octave)= [v_{1} v_{2} v_{3} … v_{L}]

In all of these implementations the first element is the number of input features to the Deep Learning network, and the last element is the number of units in the output layer, which always uses a ‘sigmoid’ activation function since all the problems here deal with binary classification.

The elements between the first and the last are the hidden layers, and the magnitude of each v_{i} is the number of activation units in that hidden layer; these are specified while actually executing the Deep Learning network using the function L_Layer_DeepModel(), in all the implementations (Python, R and Octave)
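As an illustration of how such a layer-dimension vector drives the setup, here is a hedged sketch of parameter initialization for V = [2, 9, 9, 1], with small scaled random weights and zero biases; the actual initializeModel/L_Layer_DeepModel code in my repository may differ.

import numpy as np

def initialize_parameters(layer_dims, seed=2):
    # layer_dims, e.g. [2, 9, 9, 1]: input features, two hidden layers, one output unit
    rng = np.random.default_rng(seed)
    parameters = {}
    for l in range(1, len(layer_dims)):
        # W_l has shape (units_l, units_{l-1}); biases start at zero
        parameters['W' + str(l)] = rng.normal(size=(layer_dims[l], layer_dims[l - 1])) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

params = initialize_parameters([2, 9, 9, 1])
print([(k, v.shape) for k, v in sorted(params.items())])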

1a. Classification with Multi layer Deep Learning Network – Relu activation(Python)

In the code below, a 4-layer Neural Network is trained to generate a non-linear boundary between the classes, using the ‘Relu’ activation function. The number of activation units in each hidden layer is 9. The cost vs iterations is plotted, in addition to the decision boundary. Further, the accuracy, precision, recall and F1 score are also computed

import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
execfile("./DLfunctions34.py") # 
os.chdir("C:\\software\\DeepLearning-Posts\\part3")

# Create clusters of 2 classes
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of DL Network 
#  Below we have 
#  2 - 2 input features
#  9,9 - 2 hidden layers with 9 activation units per layer and
#  1 - 1 sigmoid activation unit in the output layer as this is a binary classification
# The activation in the hidden layer is the 'relu' specified in L_Layer_DeepModel

layersDimensions = [2, 9, 9,1] #  4-layer model
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions,hiddenActivationFunc='relu', learning_rate = 0.3,num_iterations = 2500, fig="fig1.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2,Y2,str(0.3),"fig2.png")

# Compute the confusion matrix
yhat = predict(parameters,X2)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Y2.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y2.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Y2.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Y2.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Y2.T, yhat.T)))
## Accuracy: 0.90
## Precision: 0.91
## Recall: 0.87
## F1: 0.89

For more details on metrics like Accuracy, Recall, Precision etc. used in classification, take a look at my post Practical Machine Learning with R and Python – Part 2. More details about these and other metrics, besides implementation of the most common machine learning algorithms, are available in my book ‘Practical Machine Learning with R and Python’ on Amazon

1b. Classification with Multi layer Deep Learning Network – Relu activation(R)

In the code below, binary classification is performed on the same data set as above using the Relu activation function. The DL network is the same as above

library(ggplot2)
# Read the data
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X1 <- t(x)
Y1 <- t(y)

# Set the dimensions of the Deep Learning network
# No of input features =2, 2 hidden layers with 9 activation units and 1 output layer
layersDimensions = c(2, 9, 9,1)
# Execute the Deep Learning Neural Network
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions,
                               hiddenActivationFunc='relu', 
                               learningRate = 0.3,
                               numIterations = 5000, 
                               print_cost = TRUE)
library(ggplot2)
source("DLfunctions33.R")
# Get the computed costs
costs <- retvals[['costs']]
# Create a sequence of iterations
numIterations=5000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
# Plot the Costs vs number of iterations
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

# Plot the decision boundary
plotDecisionBoundary(z,retvals,hiddenActivationFunc="relu",0.3)

library(caret)
# Predict the output for the data values
yhat <-predict(retvals$parameters,X1,hiddenActivationFunc="relu")
yhat[yhat==FALSE]=0
yhat[yhat==TRUE]=1
# Compute the confusion matrix
confusionMatrix(yhat,Y1)
## Confusion Matrix and Statistics
## 
##           Reference
## Prediction   0   1
##          0 201  10
##          1  21 168
##                                           
##                Accuracy : 0.9225          
##                  95% CI : (0.8918, 0.9467)
##     No Information Rate : 0.555           
##     P-Value [Acc > NIR] : < 2e-16         
##                                           
##                   Kappa : 0.8441          
##  Mcnemar's Test P-Value : 0.07249         
##                                           
##             Sensitivity : 0.9054          
##             Specificity : 0.9438          
##          Pos Pred Value : 0.9526          
##          Neg Pred Value : 0.8889          
##              Prevalence : 0.5550          
##          Detection Rate : 0.5025          
##    Detection Prevalence : 0.5275          
##       Balanced Accuracy : 0.9246          
##                                           
##        'Positive' Class : 0               
## 

1c. Classification with Multi layer Deep Learning Network – Relu activation(Octave)

Included below is the code for performing classification. Incidentally, Octave does not seem to have implemented the confusion matrix, but confusionmat is available in Matlab.
# Read the data
data=csvread("data.csv");
X=data(:,1:2);
Y=data(:,3);
# Set layer dimensions
layersDimensions = [2 9 7 1] #tanh=-0.5(ok), #relu=0.1 best!
# Execute Deep Network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions, hiddenActivationFunc='relu', learningRate = 0.1, numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="relu")


2a. Classification with Multi layer Deep Learning Network – Tanh activation(Python)

Below the Tanh activation function is used to perform the same classification. I found the Tanh activation required a simpler Neural Network of 3 layers.

# Tanh activation
import os
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import sklearn.linear_model

from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
import sklearn
import sklearn.datasets

#from DLfunctions import plot_decision_boundary
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions34.py") 
# Create the dataset
X1, Y1 = make_blobs(n_samples = 400, n_features = 2, centers = 9,
                       cluster_std = 1.3, random_state = 4)
#Create 2 classes
Y1=Y1.reshape(400,1)
Y1 = Y1 % 2
X2=X1.T
Y2=Y1.T
# Set the dimensions of the Neural Network
layersDimensions = [2, 4, 1] #  3-layer model
# Compute the DL network
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='tanh', learning_rate = .5,num_iterations = 2500,fig="fig3.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2,Y2,str(0.5),"fig4.png")

2b. Classification with Multi layer Deep Learning Network – Tanh activation(R)

R performs better with a Tanh activation than the Relu as can be seen below

 #Set the dimensions of the Neural Network
layersDimensions = c(2, 9, 9,1)
library(ggplot2)
# Read the data
z <- as.matrix(read.csv("data.csv",header=FALSE)) 
x <- z[,1:2]
y <- z[,3]
X1 <- t(x)
Y1 <- t(y)
# Execute the Deep Model
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions,
                            hiddenActivationFunc='tanh', 
                            learningRate = 0.3,
                            numIterations = 5000, 
                            print_cost = TRUE)
# Get the costs
costs <- retvals[['costs']]
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
# Plot Cost vs number of iterations
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

#Plot the decision boundary
plotDecisionBoundary(z,retvals,hiddenActivationFunc="tanh",0.3)

2c. Classification with Multi layer Deep Learning Network – Tanh activation(Octave)

The code below uses the   Tanh activation in the hidden layers for Octave
# Read the data
data=csvread("data.csv");
X=data(:,1:2);
Y=data(:,3);
# Set layer dimensions
layersDimensions = [2 9 7 1] #tanh=-0.5(ok), #relu=0.1 best!
# Execute Deep Network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions, hiddenActivationFunc='tanh', learningRate = 0.1, numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="tanh")


3. Bernoulli’s Lemniscate

To make things  more interesting, I create a 2D figure of the Bernoulli’s lemniscate to perform non-linear classification. The Lemniscate is given by the equation
(x^{2} + y^{2})^{2} = 2a^{2}*(x^{2}-y^{2})

3a. Classifying a lemniscate with Deep Learning Network – Relu activation(Python)

import os
import numpy as np 
import matplotlib.pyplot as plt
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions33.py") 
x1=np.random.uniform(0,10,2000).reshape(2000,1)
x2=np.random.uniform(0,10,2000).reshape(2000,1)

X=np.append(x1,x2,axis=1)
X.shape

# Label the points: a point is classified as 1 when it lies inside the lemniscate
# (centred at (5,5)), i.e. when
# ((x-5)^{2} + (y-5)^{2})^{2} - 2a^{2}*((x-5)^{2} - (y-5)^{2}) <= 0
a=np.power(np.power(X[:,0]-5,2) + np.power(X[:,1]-5,2),2)
b=np.power(X[:,0]-5,2) - np.power(X[:,1]-5,2)
c= a - (b*np.power(4,2)) <=0
Y=c.reshape(2000,1)
# Create a scatter plot of the lemniscate
plt.scatter(X[:,0], X[:,1], c=Y, marker= 'o', s=15,cmap="viridis")
Z=np.append(X,Y,axis=1)
plt.savefig("fig50.png",bbox_inches='tight')
plt.clf()

# Set the data for classification
X2=X.T
Y2=Y.T
# These settings work the best
# Set the Deep Learning layer dimensions for a Relu activation
layersDimensions = [2,7,4,1]
#Execute the DL network
parameters = L_Layer_DeepModel(X2, Y2, layersDimensions, hiddenActivationFunc='relu', learning_rate = 0.5,num_iterations = 10000, fig="fig5.png")
#Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X2, Y2,str(2.2),"fig6.png")

# Compute the Confusion matrix
yhat = predict(parameters,X2)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Y2.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Y2.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Y2.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Y2.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Y2.T, yhat.T)))
## Accuracy: 0.93
## Precision: 0.77
## Recall: 0.76
## F1: 0.76

We could get better performance by tuning further. Do play around if you fork the code.
Note: The lemniscate data is saved as a CSV and then read in R and also in Octave. I do this instead of recreating the lemniscate shape.

3b. Classifying a lemniscate with Deep Learning Network – Tanh activation(R code)

The R decision boundary for the Bernoulli’s lemniscate is shown below

Z <- as.matrix(read.csv("lemniscate.csv",header=FALSE))
Z1=data.frame(Z)
# Create a scatter plot of the lemniscate
ggplot(Z1,aes(x=V1,y=V2,col=V3)) +geom_point()
#Set the data for the DL network
X=Z[,1:2]
Y=Z[,3]

X1=t(X)
Y1=t(Y)

# Set the layer dimensions for the tanh activation function
layersDimensions = c(2,5,4,1)
# Execute the Deep Learning network with Tanh activation
retvals = L_Layer_DeepModel(X1, Y1, layersDimensions, 
                               hiddenActivationFunc='tanh', 
                               learningRate = 0.3,
                               numIterations = 20000, print_cost = TRUE)
# Plot cost vs iteration
costs <- retvals[['costs']]
numIterations = 20000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

#Plot the decision boundary
plotDecisionBoundary(Z,retvals,hiddenActivationFunc="tanh",0.3)

3c. Classifying a lemniscate with Deep Learning Network – Relu activation(Octave code)

Octave is used to generate the non-linear lemniscate boundary.

# Read the data
data=csvread("lemniscate.csv");
X=data(:,1:2);
Y=data(:,3);
# Set the dimensions of the layers
layersDimensions = [2 9 7 1]
# Compute the DL network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions, hiddenActivationFunc='relu', learningRate = 0.20, numIterations = 10000);
plotCostVsIterations(10000,costs);
plotDecisionBoundary(data,weights, biases,hiddenActivationFunc="relu")


4a. Binary Classification using MNIST – Python code

Finally I perform a simple classification using the MNIST handwritten digits, which according to Prof Geoffrey Hinton is “the Drosophila of Deep Learning”.

The Python code for reading the MNIST data is taken from Alex Kesling’s github link MNIST.

In the Python code below, I perform a simple binary classification between the handwritten digit ‘5’ and ‘not 5’ which is all other digits. I will perform the proper classification of all digits using the  Softmax classifier some time later.

import os
import numpy as np 
import matplotlib.pyplot as plt
os.chdir("C:\\software\\DeepLearning-Posts\\part3")
execfile("./DLfunctions34.py") 
execfile("./load_mnist.py")
training=list(read(dataset='training',path="./mnist"))
test=list(read(dataset='testing',path="./mnist"))
lbls=[]
pxls=[]
print(len(training))

# Select the first 10000 training data and the labels
for i in range(10000):
       l,p=training[i]
       lbls.append(l)
       pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)   

#  Set y=1 when labels == 5 and 0 otherwise
y=(labels==5).reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)

# Create the necessary feature and target variable
X1=X.T
Y1=y.T

# Create the layer dimensions. The number of features are 28 x 28 = 784 since the 28 x 28
# pixels is flattened to single vector of length 784.
layersDimensions=[784, 15,9,7,1] # Works very well
parameters = L_Layer_DeepModel(X1, Y1, layersDimensions, hiddenActivationFunc='relu', learning_rate = 0.1,num_iterations = 1000, fig="fig7.png")

# Test data
lbls1=[]
pxls1=[]
for i in range(800):
       l,p=test[i]
       lbls1.append(l)
       pxls1.append(p)
 
testLabels=np.array(lbls1)
testData=np.array(pxls1)

ytest=(testLabels==5).reshape(-1,1)
Xtest=testData.reshape(testData.shape[0],-1)
Xtest1=Xtest.T
Ytest1=ytest.T

yhat = predict(parameters,Xtest1)
from sklearn.metrics import confusion_matrix
a=confusion_matrix(Ytest1.T,yhat.T)
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy: {:.2f}'.format(accuracy_score(Ytest1.T, yhat.T)))
print('Precision: {:.2f}'.format(precision_score(Ytest1.T, yhat.T)))
print('Recall: {:.2f}'.format(recall_score(Ytest1.T, yhat.T)))
print('F1: {:.2f}'.format(f1_score(Ytest1.T, yhat.T)))

probs=predict_proba(parameters,Xtest1)
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(Ytest1.T, probs.T)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.savefig("fig8.png",bbox_inches='tight')

## Accuracy: 0.99
## Precision: 0.96
## Recall: 0.89
## F1: 0.92

In addition to plotting the Cost vs Iterations, I also plot the Precision-Recall curve to show how Precision and Recall, which are complementary to each other, vary with respect to each other. To know more about Precision-Recall, please check my post Practical Machine Learning with R and Python – Part 4.

Check out my compact and minimal book  “Practical Machine Learning with R and Python:Second edition- Machine Learning in stereo”  available in Amazon in paperback($10.99) and kindle($7.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems and how to select the best model. Pick your copy today!!

A physical copy of the book is much better than scrolling down a webpage. Personally, I tend to use my own book quite frequently to refer to R, Python constructs,  subsetting, machine Learning function calls and the necessary parameters etc. It is useless to commit any of this to memory, and a physical copy of a book is much easier to thumb through for the relevant code snippet. Pick up your copy today!

4b. Binary Classification using MNIST – R code

In the R code below the same binary classification of the digit ‘5’ and the ‘not 5’ is performed. The code to read and display the MNIST data is taken from Brendan O’ Connor’s github link at MNIST

source("mnist.R")
load_mnist()
#show_digit(train$x[2,]
layersDimensions=c(784, 7,7,3,1) # Works at 1500
x <- t(train$x)
# Choose only 5000 training data
x2 <- x[,1:5000]
y <-train$y
# Set labels for all digits that are 'not 5' to 0
y[y!=5] <- 0
# Set labels of digit 5 as 1
y[y==5] <- 1
# Set the data
y1 <- as.matrix(y)
y2 <- t(y1)
# Choose the 1st 5000 data
y3 <- y2[,1:5000]

#Execute the Deep Learning Model
retvals = L_Layer_DeepModel(x2, y3, layersDimensions, 
                               hiddenActivationFunc='tanh', 
                               learningRate = 0.3,
                               numIterations = 3000, print_cost = TRUE)
# Plot cost vs iteration
costs <- retvals[['costs']]
numIterations = 3000
iterations <- seq(0,numIterations,by=1000)
df <-data.frame(iterations,costs)
ggplot(df,aes(x=iterations,y=costs)) + geom_point() +geom_line(color="blue") +
    xlab('No of iterations') + ylab('Cost') + ggtitle("Cost vs No of iterations")

# Compute probability scores
scores <- computeScores(retvals$parameters, x2,hiddenActivationFunc='relu')
a=y3==1
b=y3==0

# Compute probabilities of class 0 and class 1
class1=scores[a]
class0=scores[b]

# Plot the Precision-Recall curve (pr.curve is from the PRROC package)
library(PRROC)
pr <-pr.curve(scores.class0=class1,
        scores.class1=class0,
       curve=T)

plot(pr)

The precision-recall curve hugs the top of the plot, and hence the performance of the classifier is quite good.

4c. Binary Classification using MNIST – Octave code

The code to load the MNIST data was taken from Daniel E’s blog.
Precision recall curves are available in Matlab but are yet to be implemented in Octave’s statistics package.

load('./mnist/mnist.txt.gz'); % load the dataset
# Subset the 'not 5' digits
a=(trainY != 5);
# Subset '5'
b=(trainY == 5);
#make a copy of trainY
#Set 'not 5' as 0 and '5' as 1
y=trainY;
y(a)=0;
y(b)=1;
X=trainX(1:5000,:);
Y=y(1:5000);
# Set the dimensions of layer
layersDimensions=[784, 7,7,3,1];
# Compute the DL network
[weights biases costs]=L_Layer_DeepModel(X', Y', layersDimensions, hiddenActivationFunc='relu', learningRate = 0.1, numIterations = 5000);

Conclusion

It was quite a challenge coding a Deep Learning Network in Python, R and Octave. The Deep Learning network implementation in this post is the base Deep Learning network, without any of the regularization methods included. Here are some key learnings I got while playing with different multi-layer networks on different problems

a. Deep Learning Networks come with many levers, the hyper-parameters,
– learning rate
– activation unit
– number of hidden layers
– number of units per hidden layer
– number of iterations while performing gradient descent
b. Deep Networks are very sensitive. A change in any of the hyper-parameters makes the network perform very differently
c. Initially I thought adding more hidden layers, or more units per hidden layer, would make the DL network better at learning. On the contrary, there is a performance degradation after the optimal DL configuration
d. At a sub-optimal number of hidden layers or number of hidden units, gradient descent seems to get stuck at a local minimum
e. There were occasions when the cost came down, only to increase slowly as the number of iterations was increased. Probably early stopping would have helped.
f. I also did come across situations of ‘exploding/vanishing gradients’, where the cost went to Inf/-Inf. Here I would think inclusion of the ‘momentum method’ would have helped

I intend to add the additional hyper-parameters of L1, L2 regularization, momentum method, early stopping etc. into the code in my future posts.
Feel free to fork/clone the code from Github Deep Learning – Part 3, and take the DL network apart and play around with it.

I will be continuing this series with more hyper-parameters to handle vanishing and exploding gradients, early stopping and regularization in the weeks to come. I also intend to add some more activation functions to this basic Multi-Layer Network.
Hang around, there are more exciting things to come.

Watch this space!

References
1. Deep Learning Specialization
2. Neural Networks for Machine Learning
3. Deep Learning, Ian Goodfellow, Yoshua Bengio and Aaron Courville
4. Neural Networks: The mechanics of backpropagation
5. Machine Learning

Also see
1.My book ‘Practical Machine Learning with R and Python’ on Amazon
2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
3. Designing a Social Web Portal
4. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
4. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
6. Presentation on “Intelligent Networks, CAMEL protocol, services & applications
7. Design Principles of Scalable, Distributed Systems

To see all posts see Index of posts

Neural Networks: The mechanics of backpropagation


The initial work on the ‘Backpropagation Algorithm’ started in the 1980s and led to an explosion of interest in Neural Networks and in the application of backpropagation.

The ‘Backpropagation’ algorithm computes the minimum of an error function with respect to the weights in the Neural Network. It uses the method of gradient descent. The combination of weights in a multi-layered neural network, which minimizes the error/cost function is considered to be a solution of the learning problem.

neuron-1

In the Neural Network above
out_{o1} =\sum_{i} w_{i}*x_{i}
E = 1/2(target - out)^{2}
\partial E/\partial out= 1/2*2*(target - out) *-1 = -(target - out)
\partial E/\partial w_{i} =\partial E/\partial out * \partial out/\partial w_{i}
\partial E/\partial w_{i} = -(target - out) * x_{i}
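
Purely as an illustration of these two relations, here is a minimal Python/numpy sketch of one weight update for this single neuron. The numbers are arbitrary and the variable names simply follow the notation above.

import numpy as np

x = np.array([0.5, -1.2, 0.3])     # inputs x_i
w = np.array([0.1, 0.4, -0.2])     # weights w_i
target = 0.7
alpha = 0.1                        # learning rate

out = np.dot(w, x)                 # out = sum_i w_i * x_i
dE_dw = -(target - out) * x        # dE/dw_i = -(target - out) * x_i
w = w - alpha * dE_dw              # gradient descent step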


Perceptrons and single-layered neural networks can only classify if the sample space is linearly separable. For non-linear decision boundaries, a multi-layered neural network with backpropagation is required to generate more complex boundaries. The backpropagation algorithm computes the minimum of the error function in weight space using the method of gradient descent. This computation of the gradient requires the activation function to be both differentiable and continuous. Hence the sigmoid or logistic function is typically chosen as the activation function at every layer.

This post looks at a 3-layer neural network with 1 input, 1 hidden and 1 output layer. To a large extent this post is based on Matt Mazur’s detailed “A step by step backpropagation example“, Prof Hinton’s “Neural Networks for Machine Learning” at Coursera and a few other sources.

While Matt Mazur’s post uses example values, I generate the formulas for the gradient derivatives for each weight in the hidden and input layers. I intend to implement a vector version of backpropagation in Octave, R and Python. So this post is a prequel to that.

The 3 layer neural network is as below

nn

Some basic derivations which are used in backpropagation

Chain rule of differentiation
Let y=f(u)
and u=g(x) then
\partial y/\partial x = \partial y/\partial u * \partial u/\partial x

An important result
y=1/(1+e^{-z})
Let x= 1 + e^{-z}  then
y = 1/x
\partial y/\partial x = -1/x^{2}
\partial x/\partial z = -e^{-z}

Using the chain rule of differentiation we get
\partial y/\partial z = \partial y/\partial x * \partial x/\partial z
=-1/(1+e^{-z})^{2}* -e^{-z} = e^{-z}/(1+e^{-z})^{2}
= [1/(1+e^{-z})] * [e^{-z}/(1+e^{-z})] = y * (1-y)
Therefore \partial y/\partial z = y(1-y)                                   -(A)
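
Result (A) can also be checked numerically. The short Python/numpy snippet below (purely illustrative) compares a finite-difference estimate of the derivative of the sigmoid with y(1-y).

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

z = 0.7
h = 1e-6
numerical = (sigmoid(z + h) - sigmoid(z - h)) / (2 * h)   # finite-difference derivative
y = sigmoid(z)
analytical = y * (1 - y)                                  # result (A)
print(numerical, analytical)       # the two values agree to several decimal places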

1) Feed forward network
The net input to the hidden layer units is
in_{h1} = w_{1}i_{1} + w_{2}i_{2} + b_{1}
in_{h2} = w_{3}i_{1} + w_{4}i_{2} + b_{1}

The sigmoid/logistic function is used to generate the activation output of each hidden unit. The sigmoid is chosen because it is continuous and also has a continuous derivative

out_{h1} = 1/(1+e^{-in_{h1}})
out_{h2} = 1/(1+e^{-in_{h2}})

The net input to the output layer units is
in_{o1} = w_{5}out_{h_{1}} +  w_{6}out_{h_{2}} + b_{2}
in_{o2} = w_{7}out_{h_{1}} +  w_{8}out_{h_{2}} + b_{2}

Total error
E_{total} = 1/2\sum (target - output)^{2}
E_{total} = E_{o1} + E_{o2}
E_{total} = 1/2(target_{o_{1}} - out_{o_{1}})^{2} + 1/2(target_{o_{2}} - out_{o_{2}})^{2}
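
Before moving to the backward pass, the feed-forward equations above can be written out directly in Python/numpy. The sketch below is purely illustrative: the inputs, targets, weights and biases are arbitrary example values, and the variable names simply mirror the notation above.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Arbitrary example values, purely for illustration
i1, i2 = 0.05, 0.10
t_o1, t_o2 = 0.01, 0.99
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
b1, b2 = 0.35, 0.60

# Hidden layer
in_h1 = w1 * i1 + w2 * i2 + b1
in_h2 = w3 * i1 + w4 * i2 + b1
out_h1, out_h2 = sigmoid(in_h1), sigmoid(in_h2)

# Output layer
in_o1 = w5 * out_h1 + w6 * out_h2 + b2
in_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o1, out_o2 = sigmoid(in_o1), sigmoid(in_o2)

# Total squared error
E_total = 0.5 * (t_o1 - out_o1) ** 2 + 0.5 * (t_o2 - out_o2) ** 2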

2) The backward pass
In the backward pass we need to compute how the squared error changes with a change in each weight, i.e. we compute \partial E_{total}/\partial w_{i} for each weight w_{i}. This is shown below

A squared error is assumed

Error gradient with respect to w_{5}

output
 \partial E_{total}/\partial w_{5} = \partial E_{total}/\partial out_{o_{1}} * \partial out_{o_{1}}/\partial in_{o_{1}} * \partial in_{o_{1}}/ \partial w_{5}                -(B)

Since
E_{total} = 1/2\sum (target - output)^{2}
E_{total} = 1/2(target_{o_{1}} - out_{o_{1}})^{2} + 1/2(target_{o_{2}} - out_{o_{2}})^{2}
 \partial E_{total}/\partial out_{o1} = \partial E_{o1}/\partial out_{o1} + \partial E_{o2}/\partial out_{o1}
 \partial E_{total}/\partial out_{o1} = \partial /\partial out_{o1}[1/2(target_{o1}-out_{o1})^{2} + 1/2(target_{o2}-out_{o2})^{2}]
 \partial E_{total}/\partial out_{o1} = 2 * 1/2 * (target_{o1} - out_{o1}) * -1 + 0 = -(target_{o1} - out_{o1})

Now considering the 2nd term in (B)
\partial out_{o1}/\partial in_{o1} = \partial/\partial in_{o1} [1/(1+e^{-in_{o1}})]

Using result (A)
 \partial out_{o1}/\partial in_{o1} = \partial/\partial in_{o1} [1/(1+e^{-in_{o1}})] = out_{o1}(1-out_{o1})

The 3rd term in (B)
 \partial in_{o1}/\partial w_{5} = \partial/\partial w_{5} [w_{5}*out_{h1} + w_{6}*out_{h2} + b_{2}] = out_{h1}
 \partial E_{total}/\partial w_{5}=-(target_{o1} - out_{o1}) * out_{o1} *(1-out_{o1}) * out_{h1}

Having computed \partial E_{total}/\partial w_{5}, we now perform gradient descent, by computing a new weight, assuming a learning rate \alpha
 w_{5}^{+} = w_{5} - \alpha * \partial E_{total}/\partial w_{5}

If we do the same for \partial E_{total}/\partial w_{6} (which connects out_{h2} to in_{o1}) we get
 \partial E_{total}/\partial w_{6}=-(target_{o1} - out_{o1}) * out_{o1} *(1-out_{o1}) * out_{h2}

3)Hidden layer

hidden
We now compute how the total error changes for a change in weight w_{1}
 \partial E_{total}/\partial w_{1}= \partial E_{total}/\partial out_{h1}* \partial out_{h1}/\partial in_{h1} * \partial in_{h1}/\partial w_{1} – (C)

Using
E_{total} = E_{o1} + E_{o2} we get
 \partial E_{total}/\partial w_{1}= (\partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}) * \partial out_{h1}/\partial in_{h1} * \partial in_{h1}/\partial w_{1}
\partial E_{total}/\partial w_{1}=(\partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}) * out_{h1}*(1-out_{h1})*i_{1}     -(D)

Considering the 1st term in (C)
 \partial E_{total}/\partial out_{h1}= \partial E_{o1}/\partial out_{h1}+  \partial E_{o2}/\partial out_{h1}

Now
 \partial E_{o1}/\partial out_{h1} = \partial E_{o1}/\partial out_{o1} *\partial out_{o1}/\partial in_{o1} * \partial in_{o1}/\partial out_{h1}
 \partial E_{o2}/\partial out_{h1} = \partial E_{o2}/\partial out_{o2} *\partial out_{o2}/\partial in_{o2} * \partial in_{o2}/\partial out_{h1}

which gives the following
 \partial E_{o1}/\partial out_{o1} *\partial out_{o1}/\partial in_{o1} * \partial in_{o1}/\partial out_{h1} =-(target_{o1}-out_{o1}) *out_{o1}(1-out_{o1})*w_{5} - (E)
 \partial E_{o2}/\partial out_{o2} *\partial out_{o2}/\partial in_{o2} * \partial in_{o2}/\partial out_{h1} =-(target_{o2}-out_{o2}) *out_{o2}(1-out_{o2})*w_{7} - (F)

Combining (D), (E) & (F) we get
\partial E_{total}/\partial w_{1} = -[(target_{o1}-out_{o1}) *out_{o1}(1-out_{o1})*w_{5} + (target_{o2}-out_{o2}) *out_{o2}(1-out_{o2})*w_{7}]*out_{h1}*(1-out_{h1})*i_{1}

This can be represented as
\partial E_{total}/\partial w_{1} = -\sum_{i}[(target_{oi}-out_{oi}) *out_{oi}(1-out_{oi})*w_{j}]*out_{h1}*(1-out_{h1})*i_{1}

With this derivative a new value of w_{1} is computed
 w_{1}^{+} = w_{1} - \alpha * \partial E_{total}/\partial w_{1}

Hence there are 2 important results
At the output layer we have
a)  \partial E_{total}/\partial w_{j}=-(target_{oi} - out_{oi}) * out_{oi} *(1-out_{oi}) * out_{hi}
At each hidden layer we compute
b) \partial E_{total}/\partial w_{k} = -\sum_{i}[(target_{oi}-out_{oi}) *out_{oi}(1-out_{oi})*w_{j}]*out_{hk}*(1-out_{hk})*i_{k}
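
To make these two results concrete, here is a small, self-contained Python/numpy sketch that repeats the illustrative forward pass above and then applies result (a) to w_{5} and result (b) to w_{1}, followed by the gradient descent updates. As before, the numeric values are arbitrary and the names only mirror the notation of this derivation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Same arbitrary example values as in the forward-pass sketch
i1, i2, b1, b2 = 0.05, 0.10, 0.35, 0.60
t_o1, t_o2 = 0.01, 0.99
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55
alpha = 0.5                        # learning rate

# Forward pass
out_h1 = sigmoid(w1 * i1 + w2 * i2 + b1)
out_h2 = sigmoid(w3 * i1 + w4 * i2 + b1)
out_o1 = sigmoid(w5 * out_h1 + w6 * out_h2 + b2)
out_o2 = sigmoid(w7 * out_h1 + w8 * out_h2 + b2)

# Result (a): gradient at the output layer, e.g. for w5
dE_dw5 = -(t_o1 - out_o1) * out_o1 * (1 - out_o1) * out_h1

# Result (b): gradient at the hidden layer, e.g. for w1
delta_o1 = -(t_o1 - out_o1) * out_o1 * (1 - out_o1)
delta_o2 = -(t_o2 - out_o2) * out_o2 * (1 - out_o2)
dE_dw1 = (delta_o1 * w5 + delta_o2 * w7) * out_h1 * (1 - out_h1) * i1

# Gradient descent updates
w5_new = w5 - alpha * dE_dw5
w1_new = w1 - alpha * dE_dw1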

Backpropagation was very successful in the early years, but the algorithm does have its problems, e.g. the issue of the ‘vanishing’ and ‘exploding’ gradient. Yet it is a key development in Neural Networks, and the issues with the backprop gradients have been addressed through techniques such as the momentum method, adaptive learning rates etc.

In this post, I derived the weights at the output layer and the hidden layer. As I mentioned above, I intend to implement a vector version of the backpropagation algorithm in Octave, R and Python in the days to come.

Watch this space! I’ll be back

P.S. If you find any typos/errors, do let me know!

References
1. Neural Networks for Machine Learning by Prof Geoffrey Hinton
2. A Step by Step Backpropagation Example by Matt Mazur
3. The Backpropagation algorithm by R Rojas
4. Backpropagation Learning Artificial Neural Networks David S Touretzky
5. Artificial Intelligence, Prof Sudeshna Sarkar, NPTEL

Also see my other posts
1. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
2. Design Principles of Scalable, Distributed Systems
3. A method for optimal bandwidth usage by auctioning available bandwidth using the OpenFlow protocol
4. De-blurring revisited with Wiener filter using OpenCV
5. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and tables
6. Re-introducing cricketr! : An R package to analyze performances of cricketers

To see all my posts go to ‘Index of Posts

Close encounters with the future



Published in Telecom Asia, Oct 22,2013 – Close encounters with the future

Where a calculator on the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have only 1,000 vacuum tubes and perhaps weigh 1.5 tons.—POPULAR MECHANICS, 1949

Introduction: Ray Kurzweil in his non-fiction book “The Singularity is near – When humans transcend biology” predicts that by the year 2045 the Singularity will allow humans to transcend our ‘frail biological bodies’ and our ‘petty, derivative and circumscribed brains’ . Specifically the book claims “that there will be a ‘technological singularity’ in the year 2045, a point where progress is so rapid it outstrips humans’ ability to comprehend it. Irreversibly transformed, people will augment their minds and bodies with genetic alterations, nanotechnology, and artificial intelligence”.

He believes that advances in robotics, AI, nanotechnology and genetics will grow exponentially and will lead us into a future realm of intelligence that will far exceed biological intelligence. This explosion will be the result of ‘accelerating returns from significant advances in technology”

Futurescape

Here is a look at some of the more fascinating key trends in technology. You can decide whether we are heading to Singularity or not.

Autonomous Vehicles (AVs): Self-driving cars have moved from the realm of science fiction to reality in recent times. Google’s autonomous cars have already driven around half a million miles. All the major car manufacturers of the world, from BMW, Mercedes and Toyota to Nissan, Ford and GM, are coming out with their own versions of autonomous cars. These cars are equipped with Adaptive Cruise Control and Collision Avoidance technologies and are already taking away some control from drivers. Moreover, AVs alert drivers if their attention strays from the road ahead for too long. Autonomous Vehicles work with the help of Vehicular Communication Technology.

Vehicular Communication along with the Intelligent Transport Systems (ITS) achieves safety by enabling communication between vehicles, people and roads. Vehicle-to-vehicle communications are the fundamental building block of autonomous, self-driving cars. It enables the exchange of data between vehicles and allows automobiles to “see” and adapt to driving obstacles more completely, preventing accidents besides resulting in more efficient driving.

Smart Assistants: From the defeat of Kasparov in chess by IBM’s Deep Blue in 1997, to the resounding victory of IBM’s Watson in Jeopardy, capable of understanding natural human language, to Apple’s now prevalent intelligent assistant Siri, Artificially Intelligent (AI) systems have come a long way. The newest trend in this area is Smart Assistants. Robots are currently analyzing documents, filling prescriptions, and handling other tasks that were once done exclusively by humans. Smart Assistants are already taking over the tasks of BPO operators, paralegals, store clerks and babysitters. Robots, in many ways, are not only smarter than humans, but also do not get easily bored.

Intelligent homes and intelligent offices: Rapid advances in technology will be closer to home, both literally and figuratively. The future home will have the ability to detect the presence of people, pets and smoke, and changes in humidity, moisture, lighting and temperature. Smart devices will monitor the environment and take appropriate steps to save energy, improve safety and enhance the security of homes. Devices will start learning your habits and enhance your comfort and convenience. Everything from thermostats, fire detectors and washing machines to refrigerators will be equipped with electronics capable of adapting to the environment. All gadgets at home will be accessible through laptops, tablets or smartphones from anywhere. We will be able to monitor all aspects of our intelligent home from anywhere.

Smart devices will also make major inroads into offices leading to the birth of intelligent offices where the lighting, heating, cooling will be based on the presence of people in the offices. This will result in an enormous savings in energy. The advances in intelligent homes and intelligent offices will be in the greater context of the Smart Grid.

Swarms of drones: Contrary to the use of weaponized drones for unmanned aerial surveys of enemy territory, we will soon have commercial drones. Drones will increasingly be used for civilian purposes. The most compelling aspect of drones these days is that they can be easily manufactured in large quantities, are cheap, and can perform complex tasks either singly or collectively. Remotely controlled drones can perform hundreds of civilian jobs, including traffic monitoring, aerial surveying, oil pipeline inspections and monitoring of crop conditions. Drones are also being employed for the conservation of wildlife. In the wilderness of Africa, drones are already providing aerial footage of the landscape, tracking poachers and also helping herd elephants. However, before drones become a common sight, it is necessary to ensure that appropriate laws are made for maintaining the safety and security of civilians. This is likely to happen in the US in 2015, when the Federal Aviation Administration (FAA) comes up with rules to safely integrate drones into the American skies.

MOOC (Massive Open Online Course): The concept of the MOOC, or ‘Massive Open Online Course’, from top colleges, though just a few years old, is already taking the world by storm. Coursera, edX and Udacity are the top 3 MOOC providers, besides many others, and offer a variety of courses on technology, philosophy, sociology, computer science etc. As more courses become available online, the requirement of having a uniform start and end date will diminish gradually. The availability of course lectures at all times and through all devices, namely the laptop, tablet or smartphone, will result in large-scale adoption by students of all ages.

Contrary to regimented classes, MOOCs allow students to take classes at their own pace. It is likely that some students will breeze through an entire semester’s worth of classes in a few weeks. It is also likely that a few students will graduate in 4 years with more than a couple of degrees. MOOCs are a natural development, considering that the world is going to be more knowledge-driven and will need experts with a diverse set of in-depth skills. Here is an interesting article in the WSJ: “What College will be like in 2023”

3D Printing: This is another technology that is bound to become ubiquitous in our future. 3D printers will revolutionize manufacturing in ways we could never imagine. A 3D printer is similar to a hot-glue gun attached to a robotic arm; it creates an object by stacking one layer of material, typically plastic or metal, on top of another. 3D printers have been used for making everything from prosthetic limbs, phone cases and lamps all the way to a NASA-funded 3D-printed pizza. Here is a great article in the New York Times, “Dinner is Printed”. It is likely that a 3D printer will be as indispensable to our future homes as the refrigerator and the microwave.

Artificial sense organs: A recent news item in Science 2.0, “The Future touch sensitive prosthetic limbs”, discusses the invention of a prosthetic limb that can actually provide the sense of touch by stimulating the regions of the brain that deal with touch. The researchers identified the neural activity that occurs when grasping or feeling an object and successfully induced these patterns in the brain. Two parallel efforts are underway to understand how the human brain works: “The Human Brain Project”, which involves around 130 member institutions in the European Union, and Obama’s BRAIN Initiative. Both these projects attempt to give us a deeper and more meaningful understanding of how the human brain operates. Possibilities as in the movies ‘Avatar’ or ‘Terminator’ may not be far away.

The Others: Besides the above, technologies like Big Data, Cloud Computing, the Semantic Web, the Internet of Things and the Smart Grid will also swamp us in the future, and much has already been said about them.

Conclusion: The above sets of technologies represent seismic shifts and are bound to explode in our future in a million ways.

Given the advances in bionic limbs, machine-intelligent AI systems, MOOCs and Autonomous Vehicles, are we on target for the Singularity?

I wouldn’t be surprised at all!

Find me on Google+

The computer is not a dumb machine!


“The computer is a dumb machine. It needs to be told what to do at every step.” How often have we heard this refrain from friends and from those who have only an incidental interaction with computers? To them a computer is like a ball which has to be kicked from place to place. These people are either ignorant of computers, say it out of force of habit, or have a fear of computers. However, this is far from the truth. In this post, my 100th, I come to the defense of the computer in a slightly philosophical way.

The computer is truly a marvel of technology, and it truly embodies untapped intelligence. In my opinion even a safety pin is frozen intelligence: from a piece of metal, the safety pin can hold things together while pinning them, besides incorporating an aspect of safety.

Stating that the computer is a dumb machine is like saying that a television is dumb and an airplane is dumber. An airplane represents a modern miracle in which the laws of flight are built into every nut and bolt that goes into the plane. The electronics and the controls enable it to lift off, fly and land with precision, performing a miracle on every flight.

Similarly, a computer, from the bare hardware to the uppermost layer of software, is nothing but layer upon layer of human ingenuity, creativity and innovation. At the bare metal, the hardware of the computer is made up of integrated chips that execute at the rate of a billion-plus instructions per second. The circuits are organized so precisely that they are able to work together and produce a coherent output, all at blazing speeds of less than a billionth of a second.


On top of the bare-bones hardware we have programs that work at the level of assembly and machine code, made up of 0s and 1s. The machine code is nothing more than an amorphous string of 0s and 1s. At this level, the thing that is worked on (the object) and the thing that works on it (the subject) are indistinguishable. There is no subject and object at this level; what distinguishes them is the context.

Over this layer we have the Operating System (OS), which I would like to refer to as the mind of the computer. The OS manages many things all at once, much like the mind has complete control over the sense organs which receive external input. The OS manages processes, memory, devices and the CPU (its resources).

As humans, we like to pride ourselves on having consciousness. Without going into any metaphysical discussion on what consciousness is or isn’t, it is clear that the OS keeps the computer completely conscious of the state of all its resources. So just as we react to data received through our sense organs, the computer reacts to input received through its devices (mouse, keyboard), its memory etc. So does the computer have consciousness?

You say human beings are capable of thought. But what is thought if not a sensible evaluation of known concepts? In a way the OS too is constantly churning in the background, trying to make sense of the state of the CPU, the memory or the disk.

Not to give in, I can hear you say, “But human beings understand choice”. Really! So here is my program for a human being:

If provoked
Get angry

If insulted
Get hurt

If ego stoked
Go mad with joy

Just kidding! Anyway, the recent advances in cognitive computing show that it is possible to have computers choose the best alternative. IBM’s Watson is capable of evaluating alternative choices.

Over the OS we have compilers, and above those we have several applications.
The computer truly represents layers and layers of solidified human thought. Whether it is the precise hardware circuitry, the OS, the compilers or any application, they are all the result of human thought, and they are constantly at work in the computer.

So if your initial attempt to perform something useful did not quite work out, you must understand that you are working with decades of human thought embodied in the computer. Your instructions should be precise and logical; otherwise your attempts will be thwarted.

So whether it’s the computer, the mobile or your car, we should look at and appreciate the deep beauty that resides in these modern conveniences, gadgets and machines.

Find me on Google+

Working with binary trees in Lisp


Success finally! I have been trying to create and work with binary trees in Lisp for some time. It felt really difficult at first, because Lisp does not put explicit pointer-based data structures front and centre (but see the note at the end of this post). Trees are so obvious in a language like C, where one can visualize the left & right branches with pointers. The structure of a node in C is

struct node {
int value;
struct node *left;
struct node *right;
};

Lisp has no such explicit structure; Lisp works with lists, or at best lists of lists. It is really how the program interprets the nested list of lists that makes the list a binary or an n-ary tree. As I mentioned before, I had not had a whole lot of success in creating a binary tree in Lisp for quite some time. Finally I happened to come across a small version of adding an element to a set in “Structure and Interpretation of Computer Programs (SICP)” by Harold Abelson, Gerald Jay Sussman and Julie Sussman. This is probably one of the best books I have read in a long time and contains some truly profound insights into computer programming.

I adapted the Scheme code into my version of adding a node. Finally I was able to code inorder, preorder and postorder traversals.

(Note: You can clone the code below from GitHub : Binary trees in Lisp)

In this version a node of a binary tree is represented as
(node left right) so
node -> (car tree)
left_branch -> (cadr tree)
right_branch -> (caddr tree)

Here is the code
(defun entry (tree)
(car tree))

(defun left-branch (tree)
(cadr tree))

(defun right-branch (tree)
(caddr tree))

; Create a node of the binary tree
(defun make-tree (entry left right)
(list entry left right))

; Insert an element into the tree
(defun add (x tree)
(cond ((null tree) (make-tree x nil nil))
((= x (entry tree)) tree)
((< x (entry tree)) (make-tree (entry tree) (add x
(left-branch tree)) (right-branch tree)))
((> x (entry tree)) (make-tree (entry tree)
(left-branch tree) (add x (right-branch tree))))))

So I can now create a tree with a create-tree function
; Build the tree by inserting each element into the global variable 'tree'
(defun create-tree (elmnts)
(dolist (x elmnts)
(setf tree (add x tree))))

(setf tree nil)
NIL

(setf lst (list 23 12 1 4 5 28 4 9 10 45 89))
(23 12 1 4 5 28 4 9 10 45 89)

(create-tree lst)
NIL

Now I display the tree
tree

(23 (12 (1 NIL (4 NIL (5 NIL (9 NIL (10 NIL NIL))))) NIL) (28 NIL (45 NIL (89 NIL NIL))))

This nested list can be represented pictorially as a binary search tree with 23 at the root.

Now I created the 3 types of traversals
; Inorder traversal: left subtree, node, right subtree
(defun inorder (tree)
(cond ((null tree))
(t (inorder (left-branch tree))
(print (entry tree))
(inorder (right-branch tree)))))

; Preorder traversal: node, left subtree, right subtree
(defun preorder (tree)
(cond ((null tree))
(t (print (entry tree))
(preorder (left-branch tree))
(preorder (right-branch tree)))))

; Postorder traversal: left subtree, right subtree, node
(defun postorder (tree)
(cond ((null tree))
(t (postorder (left-branch tree))
(postorder (right-branch tree))
(print (entry tree)))))
[89]> (inorder tree)

1
4
5
9
10
12
23
28
45
89

[90]> (preorder tree)
23
12
1
4
5
9
10
28
45
89
T

[91]> (postorder tree)
10
9
5
4
1
12
89
45
28
23
23

Note: A couple of readers responded to me saying that I was quite wrong in saying that Lisp has no data structures. I would tend to agree that Lisp has evolved over the years to include data structures. I hope to pick up Lisp again some time later from where I left off! Till that time….

You may also like
1. A crime map of India in R: Crimes against women
2.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3.  Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
5. Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data

For all posts see Index of posts

Find me on Google+

Taking baby steps in Lisp


Lisp can be both fascinating and frustrating. Fascinating, because you can write compact code to solve really complex problems. Frustrating, because you can easily get lost in its maze of parentheses. I, for one, have been truly smitten by Lisp. My initial encounter with Lisp did not yield much success as I tried to come to terms with its strange syntax. The books I read on the Lisp language typically gloss over the exotic features of Lisp like writing Lisp code to solve the Towers of Hanoi or the Eight Queens problem. They talk about functions returning functions, back quotes and macros that can make your head spin.

With this approach I found it extremely difficult to digest the language. So I decided to view Lisp through the eyes of any other regular programming language like C, C++, Java, Perl, Python or Ruby. I was keen on being able to do regular things with Lisp before trying out its unique features. So I decided to investigate Lisp from this viewpoint and learn how to make Lisp do mundane things like assignments, conditionals, loops, arrays, input and output etc.

This post is centered on this fact.

Assignment statement

The most fundamental requirement in any language is to perform an assignment. For example, here are assignment statements in Lisp and their equivalents in C

$ (setf x 5)                                                         -> $ x = 5
$ (setf x (+ (* y 2) (* z 8)))                               -> $ x = 2*y + 8*z

Conditional statement
 
There are a couple of forms of conditional statements in Lisp. The most basic is the ‘if’ statement, which is a special form. It gives you if-then-else, but not an if-then-else-if chain

if (condition) statement else alternative-statement

In Lisp this is written as
$ (setf x 5)
$ (if (= x 5)
(setf x (+ x 5))
(setf x (- x 6)))
10

In C this equivalent to
$ x = 5
$ if (x == 5)
x = x + 5;
else
x = x -6;

However Lisp allows an if-then-else-if-else chain through the use of the COND statement

So we could write

$ (setf x 10)
$ (setf y 10)
$ (cond ((< x 5) (setf x (+ x 8)) (setf y (* 2 y)))
((= x 10) (setf x (* x 2)))
(t (setf x 8)))
20

The above statement in C would be
$ x = 10
$ y = 10
$ if (x < 5)
{
x = x + 8;
y = 2 * y;
}
else if (x == 10)
{
x = x * 2;
}
else
x = 8;

Loops
Lisp has many forms of loops: dotimes, dolist, do, loop, loop for etc. I found the following the most intuitive and the best to get started with
$  (setf x 5)
$ (let ((i 0))
(loop
(setf y (* x i))
(when (> i 10) (return))
(print i) (prin1 y)
(incf i)))

In C this could be written as
$ x = 5;
for (i = 0; i <= 10; i++)
{
y = x * i;
printf("%d %d\n", i, y);
}

Another easy looping construct in Lisp is
(loop for x from 2 to 10 by 3
do (print x))
In C this would be
for (x = 2; x <= 10; x = x + 3)
printf("%d\n", x);

Arrays
To create an array of 10 elements with initial value of 20
(setf numarray (make-array 10 :initial-element 20))
#(20 20 20 20 20 20 20 20 20 20)
To read an array element it is
$ (aref  numarray 3)                    – – – > numarray[3]
For e.g.
(setf x (* 2 (aref numarray 4)))     – – – – > x = numarray[4] * 2

Functions
(defun square (x)
(* x x))
This is the same as

int square (int x)
{
return (x * x);
}

While in C you would invoke the function as
y = square (8)

In Lisp you would write as
(setf y (square 8))

Note: In Lisp the function is invoked as (function arg1 arg2… argn) instead of (function (arg1 arg2  … argn))

Structures
a) Create a global variable *db*
(defvar *db* nil)
 

b) Make a function to create an employee record; minimal helper functions to add a record and dump the DB are also defined below, so that the calls actually run
$ (defun make-emp (name age title)
(list :name name :age age :title title))
$ (defun add-emp (emp) (push emp *db*))               ; add a record to the global *db*
$ (defun dump-db () (dolist (emp *db*) (print emp)))  ; print each record
$ (add-emp (make-emp "ganesh" 49 "manager"))
$ (add-emp (make-emp "manish" 50 "gm"))
$ (add-emp (make-emp "ram" 46 "vp"))
$ (dump-db)

For a more complete and excellent example of managing a simple DB, look at Practical Common Lisp by Peter Seibel

Reading and writing to standard output
To write to standard output you can use
(print "This is a test") or
(print '(This is a test))
To read from standard input use
(let ((temp 0))
(print '(Enter temp))
(setf temp (read))
(print (append '(the temp is) (list temp))))

Reading and writing to a file
The typical way to do this is to use

a) Read
(with-open-file (stream "C:\\acl82express\\lisp\\count.cl")
(do ((line (read-line stream nil)
(read-line stream nil)))
((null line))
(print line)))

b) Write
(with-open-file (stream "C:\\acl82express\\lisp\\test.txt"
:direction :output
:if-exists :supersede)
(write-line "test" stream)
nil)

I found the following construct a lot easier
(let ((in (open "C:\\acl82express\\lisp\\count.cl" :if-does-not-exist nil)))
(when in
(loop for line = (read-line in nil)
while line do (format t "~a~%" line))
(close in)))

With the above you can get started with Lisp. However, with just these constructs the code one writes will be very “non-Lispy”. Anyway, this is definitely a start.

Find me on Google+

The Future of Programming Languages


How will the computing landscape evolve as we move further into the new millennium? Clearly, computer architectures will evolve towards more parallelism, with multiple CPUs each handling a part of the problem in parallel. However, programming parallel architectures is no simple task and will challenge the greatest minds.

In a future where both the problems and the architectures will be extremely complex, the programming language itself will evolve towards simplicity. The programming language will be based on the natural language we use to define problems. Behind the scenes of the natural language interface will be complex Artificial Intelligence algorithms which will perform the difficult task of translating the definition of the problem into a high-level programming language like C++, Java etc.

The Artificial Intelligence interface will handle the tasks of creating variables, forming loops, defining classes and handling errors. The code generated by the machine will have far fewer syntactical errors than code created by human beings. However, while a large set of problems will be solvable through the AI interface, there will remain a class of problems which will still require human intervention and the need to work in the languages of today.

One of the drivers for this natural language of programming of the future, besides the complexity of computer architectures, is the need to bring a larger section of domain experts into solving problems in their fields without having to learn all the complex syntax and semantics of current programming languages.

This will allow everybody from astrophysicists, geneticists, historians and statisticians to be able to utilize the computer to solve the difficult problems in their domain.

Find me on Google+

Ramblings on Lisp


In the world of programming languages, Lisp can be considered truly ancient, along with its companion FORTRAN. However, it has survived to this day. Lisp had its origins as early as 1958, when John McCarthy of MIT published the design of the language in an ACM paper titled “Recursive Functions of Symbolic Expressions and Their Computation by Machine”. Lisp was invented by McCarthy as a mathematical notation for computer programs and is based on the Lambda Calculus described by Alonzo Church in the 1930s.

Lisp has not had the popularity of more recent languages like C, C++ and Java, partly because it has an unfamiliar syntax and a very steep learning curve. The Lisp syntax, with its series of parentheses, can be particularly intimidating to beginners. However, it is one of the predominant languages used in the AI domain.

Some of the key characteristics of Lisp are

Lisp derives its name from LISt Processing. Hence, while most programming languages compute on data, Lisp computes on a data structure, namely the list. A list can be visualized as a collection of elements which can be data, functions or lists themselves. Its power comes from the fact that the language includes in its syntax many of the operations that can be performed on lists and lists of lists. In fact, many key features of the Lisp language have found their way into more current languages like Python, Ruby and Perl.

Second, Lisp is a symbolic processing language. This ability to manipulate symbols gives Lisp a powerful edge over other programming languages in AI domains like theorem proving or natural language processing.

Thirdly, Lisp encourages a recursive style of programming. This makes the code much shorter than in other languages. Recursion enables the expression of a problem as a combination of a terminating condition and a self-describing sub-problem. A further advantage is that when a function is written tail-recursively and the Lisp implementation optimizes tail calls, the stack space required is of the order of O(1) rather than the O(n) common in languages like C, C++ and Java, where the stack grows with each recursive call.

Lisp blurs the distinction between functions and data. Functions take other functions as arguments during computations, and Lisp lists can contain functions that operate on other functions and data in a self-repeating fashion.

The closest analogy to this is machine code, which is a sequence of 32-bit binary words. Both the logic and the data on which it operates are 32-bit binary words and cannot be distinguished unless one knows where the program is supposed to start executing. If one were to take a snapshot of consecutive memory locations, we would encounter 32-bit binary words which represent either logical or arithmetic operations on data that are also 32-bit binary words.

Lisp is a malleable language and allows programmers to tailor the language for their own convenience. It lets the programmer shape the language so that it suits their programming style. Lisp programs evolve bottom-up rather than in the top-down style adopted in other languages. The design methodology of Lisp programs takes an inside-out approach rather than an outside-in method.

Lisp has many eminent die-hard adherents who swear by the elegance and beauty of being able to solve difficult problems concisely. On the other hand there are those to whom Lisp represents an ancient Pharaoh’s curse that is difficult to get rid of.

However with Lisp, “Once smitten, you remain smitten”.

Find me on Google+