# Player Performance Estimation using AI Collaborative Filtering

## 1. Introduction

Often, before crucial matches, or in general, we would like to know how a batsman is likely to perform against a bowler or vice versa, but we may not have the data. We generally have data where different batsmen have faced different sets of bowlers, with performance measures such as ballsFaced, totalRuns, fours, sixes, strike rate and timesOut. Similarly, different bowlers have performance figures (deliveries, runsConceded, economyRate and wicketTaken) against different sets of batsmen. We will never have the data for all batsmen against all bowlers. However, it would be useful to estimate the performance of a batsman against a bowler even when the performance data is missing. This can be done using collaborative filtering, which computes estimates based on the similarity between batsmen and, correspondingly, between bowlers.

This post shows an approach whereby we can estimate a batsman’s performance against bowlers he/she may never have faced, based on his/her performance against other bowlers. It also estimates the performance of bowlers against batsmen using the same approach. This is based on the recommender algorithm, which is used to recommend products to customers based on their ratings of other products.

This idea came to me while generating batsman vs bowler performances (and vice versa) for 2 IPL teams in IPL 2022 with the optimization tab of my Shiny app GooglyPlusPlus. I found that there were some batsmen for whom there was no data against certain bowlers, probably because they were facing them for the first time or because they were new to the team (see picture below)

In the picture above there is no data for Dewald Brevis against Jasprit Bumrah and YS Chahal. Wouldn’t it be great to estimate the performance of Brevis against Bumrah or vice versa? Can we estimate this performance?

While pondering this problem, I realized that its formulation is similar to that of the famous Netflix movie recommendation problem, in which users’ ratings for certain movies are known, and based on these ratings the recommender engine can generate ratings for movies not yet seen.

This post estimates a player’s (batsman’s or bowler’s) performance using a recommender engine. It is based on the R package recommenderlab:

Michael Hahsler (2021). recommenderlab: Lab for Developing and Testing Recommender Algorithms. R package version 0.2-7. https://github.com/mhahsler/recommenderlab

Note 1: The data for this analysis is taken from Cricsheet after being processed by my R package yorkr.

You can also read this post in RPubs at Player Performance Estimation using AI Collaborative Filtering

A PDF copy of this post is available at Player Performance Estimation using AI Collaborative Filtering.pdf

You can download this R Markdown file and the associated data and perform the analysis yourself using any other recommender engine from Github at playerPerformanceEstimation

## Problem statement

In the table below we see a set of bowlers vs a set of batsmen and the number of times the bowlers got these batsmen out.
Knowing the performance of the bowlers against some of the batsmen, we can use collaborative filtering to determine the missing values. This is done using the recommender engine.
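For instance, a small “times out” table of this kind, with hypothetical numbers and NA for pairings that have never occurred, could be set up in R as follows:

# A hypothetical batsman x bowler 'timesOut' matrix with gaps (NA = never faced)
timesOut <- matrix(c(4, NA, 1,
                     NA, 2, NA,
                     3, NA, 2),
                   nrow = 3, byrow = TRUE,
                   dimnames = list(c("Batsman A", "Batsman B", "Batsman C"),
                                   c("Bowler X", "Bowler Y", "Bowler Z")))
timesOut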

The Recommender Engine works as follows. Let us say that there are feature vectors $x^1$, $x^2$ and $x^3$ for the 3 bowlers which identify their characteristics (“fast”, “lateral drift through the air”, “movement off the pitch”). Let each batsman be identified by parameter vectors $\theta^1$, $\theta^2$ and so on.

For e.g. consider the following table

Then, assuming an initial estimate for the parameter vectors $\theta$ and the feature vectors $x$, we can formulate this as an optimization problem which tries to minimize the error of $\theta^{T}x$ against the known values. This can work very well, as the algorithm can determine features which are not explicitly captured in the data. For example, some particular bowler may have very impressive figures because of some aspect of his bowling that the data does not record, say, using the ‘scrambled seam’ with a slightly different arc to the flight when he is most effective. Though the algorithm cannot identify the feature as we know it, it should pick up such intricacies which cannot be captured directly in the data.

Hence the algorithm can be quite effective.

Note: The recommenderlab performance is not very good and the Mean Square Error is quite high. Also, the ROC and AUC curves show that the algorithm does not do a clean job of separating the True Positives (TPR) from the False Positives (FPR) in all cases.

Note: This is similar to the recommendation problem

The collaborative filtering objective can be considered as a minimization over both $\theta$ and the features $x$ and can be written as

$J(x^{(1)},x^{(2)},\dots,x^{(n_{m})},\theta^{(1)},\theta^{(2)},\dots,\theta^{(n_{u})}) = \frac{1}{2}\sum_{(i,j):r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\right)^{2} + \frac{\lambda}{2}\sum_{i=1}^{n_{m}}\sum_{k=1}^{n}(x_{k}^{(i)})^{2} + \frac{\lambda}{2}\sum_{j=1}^{n_{u}}\sum_{k=1}^{n}(\theta_{k}^{(j)})^{2}$

The collaborative filtering algorithm can be summarized as follows

1. Initialize the parameters $\theta^{(1)}, \theta^{(2)}, \dots, \theta^{(n_{u})}$ and the set of features $x^{(1)}, x^{(2)}, \dots, x^{(n_{m})}$ to small random values
2. Minimize J($x^{(1)},\dots,x^{(n_{m})}$, $\theta^{(1)},\dots,\theta^{(n_{u})}$) using gradient descent, for every j = 1, 2, …, $n_{u}$ and i = 1, 2, …, $n_{m}$
3. The update equations are

$x_{k}^{(i)} := x_{k}^{(i)} - \alpha\left(\sum_{j:r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\right)\theta_{k}^{(j)} + \lambda x_{k}^{(i)}\right)$

&

$\theta_{k}^{(j)} := \theta_{k}^{(j)} - \alpha\left(\sum_{i:r(i,j)=1}\left((\theta^{(j)})^{T}x^{(i)} - y^{(i,j)}\right)x_{k}^{(i)} + \lambda \theta_{k}^{(j)}\right)$

4. Hence for a batsman with parameters $\theta$ and a bowler with (learned) features $x$, predict the “times out” for the player where the value is not known using $\theta^{T}x$

The above derivation for the recommender problem is taken from the Machine Learning course by Prof Andrew Ng at Coursera, from the lecture on Collaborative Filtering.
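The updates in step 3 are easy to sketch directly in R. The snippet below is a minimal, illustrative implementation of these gradient descent updates (not the recommenderlab code used later), assuming a small ratings matrix Y with NAs for the unknown batsman-bowler pairings:

# Minimal sketch of the collaborative filtering updates above (illustrative only)
set.seed(2022)
Y <- matrix(c(4, NA, 3,
              NA, 2, 6,
              1, 5, NA), nrow = 3, byrow = TRUE)
nFeatures <- 2; alpha <- 0.01; lambda <- 0.1
x     <- matrix(rnorm(nrow(Y) * nFeatures, sd = 0.1), nrow(Y), nFeatures)  # features (rows of Y)
theta <- matrix(rnorm(ncol(Y) * nFeatures, sd = 0.1), ncol(Y), nFeatures)  # parameters (columns of Y)

for (iter in 1:5000) {
  err <- x %*% t(theta) - Y
  err[is.na(Y)] <- 0                      # only observed cells contribute to the gradient
  gx <- err %*% theta + lambda * x        # gradient w.r.t. the features x
  gt <- t(err) %*% x  + lambda * theta    # gradient w.r.t. the parameters theta
  x     <- x - alpha * gx
  theta <- theta - alpha * gt
}
round(x %*% t(theta), 1)                  # theta^T x fills in the NA cells as well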

There are 2 main types of Collaborative Filtering(CF) approaches

1. User based Collaborative Filtering User-based CF is a memory-based algorithm which tries to mimic word-of-mouth by analyzing rating data from many individuals. The assumption is that users with similar preferences will rate items similarly.
2. Item based Collaborative Filtering Item-based CF is a model-based approach which produces recommendations based on the relationship between items inferred from the rating matrix. The assumption behind this approach is that users will prefer items that are similar to other items they like. (A short sketch contrasting the two approaches follows below.)
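Both variants are available in recommenderlab. As a quick, minimal sketch of how the two are invoked (shown here on the MovieLense sample data that ships with the package, not on the cricket data):

# UBCF vs IBCF on recommenderlab's bundled MovieLense ratings (illustrative)
library(recommenderlab)
data(MovieLense)
rec_ubcf <- Recommender(MovieLense[1:100], method = "UBCF")
rec_ibcf <- Recommender(MovieLense[1:100], method = "IBCF")
# Estimate ratings for users that were not used in training
p_ubcf <- predict(rec_ubcf, MovieLense[101:102], type = "ratings")
as(p_ubcf, "matrix")[, 1:5]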

## 1a. A note on ROC and Precision-Recall curves

A small note on interpreting ROC & Precision-Recall curves in the post below

ROC Curve: The ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR). Ideally the TPR should increase faster than the FPR and the AUC (area under the curve) should be close to 1

Precision-Recall: The precision-recall curve shows the tradeoff between precision and recall for different thresholds. A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate
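Both curves are computed from the same four counts (TP, FP, FN, TN). A toy computation with hypothetical counts makes the definitions concrete:

# TPR/FPR (ROC) and precision/recall from a toy confusion matrix (hypothetical counts)
TP <- 40; FP <- 10; FN <- 20; TN <- 80
TPR       <- TP / (TP + FN)   # recall, i.e. sensitivity
FPR       <- FP / (FP + TN)
precision <- TP / (TP + FP)
c(TPR = TPR, FPR = FPR, precision = precision)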

library(reshape2)
library(dplyr)
library(ggplot2)
library(recommenderlab)
library(tidyr)


## 2. Define recommender lab helper functions

Helper functions for the RMarkdown notebook are created

• eval – Gives details of the RMSE, MSE and MAE of the chosen ML algorithm
• evalRecomMethods – Evaluates different recommender methods and plots the ROC and Precision-Recall curves
# This function returns the error for the chosen algorithm and also predicts the estimates
# for the given data
eval <- function(data, train1, k1, given1, goodRating1, recomType1="UBCF"){
  set.seed(2022)
  # Split the data into train/known/unknown sets
  e <- evaluationScheme(data,
                        method = "split",
                        train = train1,
                        k = k1,
                        given = given1,
                        goodRating = goodRating1)

  # Build the recommender on the training data
  r1 <- Recommender(getData(e, "train"), recomType1)
  print(r1)

  # Predict ratings for the known part of the test data
  p1 <- predict(r1, getData(e, "known"), type="ratings")
  print(p1)

  # Compute RMSE, MSE and MAE against the held-out (unknown) ratings
  error = calcPredictionAccuracy(p1, getData(e, "unknown"))
  print(error)

  # Predict the complete rating matrix (known + estimated values)
  p2 <- predict(r1, data, type="ratingMatrix")
  p2
}
# This function evaluates the different recommender algorithms and plots the ROC and precision/recall curves
evalRecomMethods <- function(data, k1, given1, goodRating1){
  set.seed(2022)
  # k-fold cross-validation scheme
  e <- evaluationScheme(data,
                        method = "cross",
                        k = k1,
                        given = given1,
                        goodRating = goodRating1)

  # The recommender algorithms to compare
  models_to_evaluate <- list(
    `IBCF Cosinus` = list(name = "IBCF",
                          param = list(method = "cosine")),
    `IBCF Pearson` = list(name = "IBCF",
                          param = list(method = "pearson")),
    `UBCF Cosinus` = list(name = "UBCF",
                          param = list(method = "cosine")),
    `UBCF Pearson` = list(name = "UBCF",
                          param = list(method = "pearson")),
    `Random` = list(name = "RANDOM", param = NULL)
  )

  # Evaluate the models for different numbers of recommendations
  n_recommendations <- c(1, 5, seq(10, 100, 10))
  list_results <- evaluate(x = e,
                           method = models_to_evaluate,
                           n = n_recommendations)
  plot(list_results, annotate = c(1,3), legend = "bottomright")
  plot(list_results, "prec/rec", annotate = 3, legend = "topleft")
}
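As an illustration, these helpers are invoked in the sections that follow roughly as below, where r0 is a filtered realRatingMatrix built from the data:

# Typical invocation of the helpers defined above
evalRecomMethods(r0, k1 = 5, given1 = 7, goodRating1 = median(getRatings(r0)))
a <- eval(r0, train1 = 0.8, k1 = 5, given1 = 7,
          goodRating1 = median(getRatings(r0)), recomType1 = "UBCF")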


## 3. Batsman performance estimation

The section below regenerates the performance for batsmen based on incomplete data for the different fields in the data frame, namely balls faced, fours, sixes, strike rate and times out. recommenderlab allows one to test several different algorithms at once, namely

1. User based – Cosine similarity method, Pearson similarity
2. Item based – Cosine similarity method, Pearson similarity
3. Popular
4. Random
5. SVD and a few others

## 3a. Batting dataframe

load("recom_data/batsmanVsBowler20_22.rdata")
head(df)

##   batsman1         bowler1 ballsFaced totalRuns fours sixes  SR timesOut
## 1 A Badoni        A Mishra          0         0     0     0 NaN        0
## 2 A Badoni        A Nortje          0         0     0     0 NaN        0
## 3 A Badoni         A Zampa          0         0     0     0 NaN        0
## 4 A Badoni     Abdul Samad          0         0     0     0 NaN        0
## 5 A Badoni Abhishek Sharma          0         0     0     0 NaN        0
## 6 A Badoni      AD Russell          0         0     0     0 NaN        0


## 3b Data set and data preparation

For this analysis the data from Cricsheet has been processed using my R package yorkr to obtain the following 2 data sets:

• batsmenVsBowler – This dataset contains the performance of batsmen against bowlers and captures a) ballsFaced b) totalRuns c) fours d) sixes e) SR f) timesOut
• bowlerVsBatsmen – This dataset contains the performance of bowlers against the different batsmen and includes a) deliveries b) runsConceded c) economyRate d) wicketsTaken

Obviously many rows/columns will be empty

This is a large data set and hence I have filtered for the period from Jan 2020 to Dec 2022, which gives 2 datasets a) batsmanVsBowler20_22.rdata b) bowlerVsBatsman20_22.rdata

I also have 2 other datasets, of all the batsmen and all the bowlers appearing in the above 2 datasets, in the files c) all-batsmen20_22.rds d) all-bowlers20_22.rds

You can download the data and this RMarkdown notebook from Github at PlayerPerformanceEstimation

Feel free to download and analyze the data and use any recommendation engine you choose

## 3c. Exploratory analysis

Initially an exploratory analysis is done on the data

# Select the batsman, bowler and timesOut columns
df3 <- select(df, batsman1,bowler1,timesOut)
# Convert to a wide-format batsman x bowler matrix
df6 <- xtabs(timesOut ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
# Treat the zero entries as missing (unknown) values
df8[df8 == 0] <- NA
print(df8[1:10,1:10])

##                 A Mishra A Nortje A Zampa Abdul Samad Abhishek Sharma
## A Badoni              NA       NA      NA          NA              NA
## A Manohar             NA       NA      NA          NA              NA
## A Nortje              NA       NA      NA          NA              NA
## AB de Villiers        NA        4       3          NA              NA
## Abdul Samad           NA       NA      NA          NA              NA
## Abhishek Sharma       NA       NA      NA          NA              NA
## AD Russell             1       NA      NA          NA              NA
## AF Milne              NA       NA      NA          NA              NA
## AJ Finch              NA       NA      NA          NA               3
## AJ Tye                NA       NA      NA          NA              NA
##                 AD Russell AF Milne AJ Tye AK Markram Akash Deep
## A Badoni                NA       NA     NA         NA         NA
## A Manohar               NA       NA     NA         NA         NA
## A Nortje                NA       NA     NA         NA         NA
## AB de Villiers           3       NA      3         NA         NA
## Abdul Samad             NA       NA     NA         NA         NA
## Abhishek Sharma         NA       NA     NA         NA         NA
## AD Russell              NA       NA      6         NA         NA
## AF Milne                NA       NA     NA         NA         NA
## AJ Finch                NA       NA     NA         NA         NA
## AJ Tye                  NA       NA     NA         NA         NA


The dots below represent missing performance data. These cells need to be estimated by the algorithm

set.seed(2022)
r <- as(df8,"realRatingMatrix")
getRatingMatrix(r)[1:15,1:15]

## 15 x 15 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## A Badoni         . . . . . . . . . . . . . . .
## A Manohar        . . . . . . . . . . . . . . .
## A Nortje         . . . . . . . . . . . . . . .
## AB de Villiers   . 4 3 . . 3 . 3 . . . 4 3 . .
## Abdul Samad      . . . . . . . . . . . . . . .
## Abhishek Sharma  . . . . . . . . . . . 1 . . .
## AD Russell       1 . . . . . . 6 . . . 3 3 3 .
## AF Milne         . . . . . . . . . . . . . . .
## AJ Finch         . . . . 3 . . . . . . 1 . . .
## AJ Tye           . . . . . . . . . . . 1 . . .
## AK Markram       . . . 3 . . . . . . . . . . .
## AM Rahane        9 . . . . 3 . 3 . . . 3 3 . .
## Anmolpreet Singh . . . . . . . . . . . . . . .
## Anuj Rawat       . . . . . . . . . . . . . . .
## AR Patel         . . . . . . . 1 . . . . . . .

r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:15,1:15]

## 15 x 15 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers  . 4 3 . . 3 . 3 . . . 4 3 . .
## Abdul Samad     . . . . . . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . . . 1 . . .
## AD Russell      1 . . . . . . 6 . . . 3 3 3 .
## AJ Finch        . . . . 3 . . . . . . 1 . . .
## AM Rahane       9 . . . . 3 . 3 . . . 3 3 . .
## AR Patel        . . . . . . . 1 . . . . . . .
## AT Rayudu       2 . . . . . 1 . . . . 3 . . .
## B Kumar         3 . 3 . . . . . . . . . . 3 .
## BA Stokes       . . . . . . 3 4 . . . 3 . . .
## CA Lynn         . . . . . . . 9 . . . 3 . . .
## CH Gayle        . . . . . 6 . 3 . . . 6 . . .
## CH Morris       . 3 . . . . . . . . . 3 . . .
## D Padikkal      . 4 . . . 3 . . . . . . 3 . .
## DA Miller       . . . . . 3 . . . . . 3 . . .

# Get the summary of the data
summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   3.000   3.000   3.463   4.000  21.000

# Normalize the data
r0_m <- normalize(r0)
getRatingMatrix(r0_m)[1:15,1:15]

## 15 x 15 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers   .         -0.7857143 -1.7857143 .  .       -1.7857143
## Abdul Samad      .          .          .         .  .        .
## Abhishek Sharma  .          .          .         .  .        .
## AD Russell      -2.6562500  .          .         .  .        .
## AJ Finch         .          .          .         . -0.03125  .
## AM Rahane        4.6041667  .          .         .  .       -1.3958333
## AR Patel         .          .          .         .  .        .
## AT Rayudu       -2.1363636  .          .         .  .        .
## B Kumar          0.3636364  .          0.3636364 .  .        .
## BA Stokes        .          .          .         .  .        .
## CA Lynn          .          .          .         .  .        .
## CH Gayle         .          .          .         .  .        1.5476190
## CH Morris        .          0.3500000  .         .  .        .
## D Padikkal       .          0.6250000  .         .  .       -0.3750000
## DA Miller        .          .          .         .  .       -0.7037037
##
## AB de Villiers   .         -1.7857143 . . . -0.7857143 -1.785714  .         .
## Abdul Samad      .          .         . . .  .          .         .         .
## Abhishek Sharma  .          .         . . . -1.6000000  .         .         .
## AD Russell       .          2.3437500 . . . -0.6562500 -0.656250 -0.6562500 .
## AJ Finch         .          .         . . . -2.0312500  .         .         .
## AM Rahane        .         -1.3958333 . . . -1.3958333 -1.395833  .         .
## AR Patel         .         -2.3333333 . . .  .          .         .         .
## AT Rayudu       -3.1363636  .         . . . -1.1363636  .         .         .
## B Kumar          .          .         . . .  .          .         0.3636364 .
## BA Stokes       -0.6086957  0.3913043 . . . -0.6086957  .         .         .
## CA Lynn          .          5.3200000 . . . -0.6800000  .         .         .
## CH Gayle         .         -1.4523810 . . .  1.5476190  .         .         .
## CH Morris        .          .         . . .  0.3500000  .         .         .
## D Padikkal       .          .         . . .  .         -0.375000  .         .
## DA Miller        .          .         . . . -0.7037037  .         .         .


## 4. Create a visual representation of the rating data before and after the normalization

The histograms show the bias in the data is removed after normalization
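By default recommenderlab’s normalize() centers each row, i.e. it subtracts a batsman’s mean rating from each of his known ratings. A tiny sketch of this behavior:

# normalize() with the default method="center" subtracts each row's mean from its known entries
mat <- matrix(c(2, 4, 9,
                1, 3, NA), nrow = 2, byrow = TRUE)
rr <- as(mat, "realRatingMatrix")
getRatingMatrix(normalize(rr))   # row 1: -3 -1 4 ; row 2: -1 1 .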

r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:15,1:10]

## 15 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers  . 4 3 . . 3 . 3 . .
## Abdul Samad     . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . .
## AD Russell      1 . . . . . . 6 . .
## AJ Finch        . . . . 3 . . . . .
## AM Rahane       9 . . . . 3 . 3 . .
## AR Patel        . . . . . . . 1 . .
## AT Rayudu       2 . . . . . 1 . . .
## B Kumar         3 . 3 . . . . . . .
## BA Stokes       . . . . . . 3 4 . .
## CA Lynn         . . . . . . . 9 . .
## CH Gayle        . . . . . 6 . 3 . .
## CH Morris       . 3 . . . . . . . .
## D Padikkal      . 4 . . . 3 . . . .
## DA Miller       . . . . . 3 . . . .

#Plot ratings
image(r0, main = "Raw Ratings")

#Plot normalized ratings
r0_m <- normalize(r0)
getRatingMatrix(r0_m)[1:15,1:15]

## 15 x 15 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers   .         -0.7857143 -1.7857143 .  .       -1.7857143
## Abdul Samad      .          .          .         .  .        .
## Abhishek Sharma  .          .          .         .  .        .
## AD Russell      -2.6562500  .          .         .  .        .
## AJ Finch         .          .          .         . -0.03125  .
## AM Rahane        4.6041667  .          .         .  .       -1.3958333
## AR Patel         .          .          .         .  .        .
## AT Rayudu       -2.1363636  .          .         .  .        .
## B Kumar          0.3636364  .          0.3636364 .  .        .
## BA Stokes        .          .          .         .  .        .
## CA Lynn          .          .          .         .  .        .
## CH Gayle         .          .          .         .  .        1.5476190
## CH Morris        .          0.3500000  .         .  .        .
## D Padikkal       .          0.6250000  .         .  .       -0.3750000
## DA Miller        .          .          .         .  .       -0.7037037
##
## AB de Villiers   .         -1.7857143 . . . -0.7857143 -1.785714  .         .
## Abdul Samad      .          .         . . .  .          .         .         .
## Abhishek Sharma  .          .         . . . -1.6000000  .         .         .
## AD Russell       .          2.3437500 . . . -0.6562500 -0.656250 -0.6562500 .
## AJ Finch         .          .         . . . -2.0312500  .         .         .
## AM Rahane        .         -1.3958333 . . . -1.3958333 -1.395833  .         .
## AR Patel         .         -2.3333333 . . .  .          .         .         .
## AT Rayudu       -3.1363636  .         . . . -1.1363636  .         .         .
## B Kumar          .          .         . . .  .          .         0.3636364 .
## BA Stokes       -0.6086957  0.3913043 . . . -0.6086957  .         .         .
## CA Lynn          .          5.3200000 . . . -0.6800000  .         .         .
## CH Gayle         .         -1.4523810 . . .  1.5476190  .         .         .
## CH Morris        .          .         . . .  0.3500000  .         .         .
## D Padikkal       .          .         . . .  .         -0.375000  .         .
## DA Miller        .          .         . . . -0.7037037  .         .         .

image(r0_m, main = "Normalized Ratings")

set.seed(1234)
hist(getRatings(r0), breaks=25)

hist(getRatings(r0_m), breaks=25)


## 4a. Data for analysis

The data frame of the batsmen vs bowlers from the period 2020-2022 is read as a dataframe. To remove rows with a very low number of ratings (timesOut, SR, fours, sixes etc.), the rows are filtered so that there are at least 10 values in the row. For the player estimation the dataframe is converted into a wide-format matrix (m x n) of batsman x bowler for each of the columns of the dataframe, i.e. timesOut, SR, fours or sixes. Each of these matrices can be considered as a rating matrix for estimation.

A similar approach is taken for estimating bowler performance. Here a wide-format matrix (m x n) of bowler x batsman is created for each of the columns deliveries, runsConceded, ER and wicketsTaken.

## 5. Batsman’s times Out

The code below estimates the number of times a batsman would lose his/her wicket to a bowler. As discussed in the algorithm above, the recommendation engine makes an initial estimate of the features for the bowlers and an initial estimate of the parameter vectors for the batsmen. Then, using gradient descent, the recommender engine determines the feature and parameter values such that the overall Mean Squared Error is minimized.

From the plot for the different algorithms it can be seen that UBCF performs the best. However, the ROC & AUC curves are not optimal, with the AUC only a little above 0.5

df3 <- select(df, batsman1,bowler1,timesOut)
df6 <- xtabs(timesOut ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
# Filter only rows where the row count is > 10
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers  . 4 3 . . 3 . 3 . .
## Abdul Samad     . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . .
## AD Russell      1 . . . . . . 6 . .
## AJ Finch        . . . . 3 . . . . .
## AM Rahane       9 . . . . 3 . 3 . .
## AR Patel        . . . . . . . 1 . .
## AT Rayudu       2 . . . . . 1 . . .
## B Kumar         3 . 3 . . . . . . .
## BA Stokes       . . . . . . 3 4 . .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   3.000   3.000   3.463   4.000  21.000

# Evaluate the different plotting methods
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

#Evaluate the error
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 70 users.
## 18 x 145 rating matrix of class 'realRatingMatrix' with 1755 ratings.
##     RMSE      MSE      MAE
## 2.069027 4.280872 1.496388

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
m=as(c,"data.frame")
names(m) =c("batsman","bowler","TimesOut")


## 6. Batsman’s Strike rate

This section deals with the strike rate of batsmen versus bowlers and estimates the values where the data is missing, using the UBCF method.

Even here all the algorithms do not perform too efficiently. I did try out a few variations but could not lower the error (suggestions welcome!!)

df3 <- select(df, batsman1,bowler1,SR)
df6 <- xtabs(SR ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers   96.8254 171.4286  33.33333  . 66.66667 223.07692   .
## Abdul Samad       .      228.0000   .        .  .       100.00000   .
## Abhishek Sharma 150.0000   .        .        .  .        66.66667   .
## AD Russell      111.4286   .        .        .  .         .         .
## AJ Finch        250.0000 116.6667   .        . 50.00000  85.71429 112.5000
## AJ Tye            .        .        .        .  .         .       100.0000
## AK Markram        .        .        .       50  .         .         .
## AM Rahane       121.1111   .        .        .  .       113.82979 117.9487
## AR Patel        183.3333   .      200.00000  .  .       433.33333   .
## AT Rayudu       126.5432 200.0000 122.22222  .  .       105.55556   .
##
## AB de Villiers  109.52381 .   .
## Abdul Samad       .       .   .
## Abhishek Sharma   .       .   .
## AD Russell      195.45455 .   .
## AJ Finch          .       .   .
## AJ Tye            .       .   .
## AK Markram        .       .   .
## AM Rahane        33.33333 . 200
## AR Patel        171.42857 .   .
## AT Rayudu       204.76190 .   .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   5.882  85.714 116.667 128.529 160.606 600.000

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 105 users.
## 27 x 145 rating matrix of class 'realRatingMatrix' with 3220 ratings.
##       RMSE        MSE        MAE
##   77.71979 6040.36508   58.58484

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
n=as(c,"data.frame")
names(n) =c("batsman","bowler","SR")


## 7. Batsman’s Sixes

The snippet of code below estimates the sixes hit by a batsman against bowlers. The ROC and AUC curve for UBCF looks a lot better here, as the AUC is significantly greater than 0.5

df3 <- select(df, batsman1,bowler1,sixes)
df6 <- xtabs(sixes ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers  3 3 . . . 18 .  3 . .
## AD Russell      3 . . . .  . . 12 . .
## AJ Finch        2 . . . .  . .  . . .
## AM Rahane       7 . . . .  3 1  . . .
## AR Patel        4 . 3 . .  6 .  1 . .
## AT Rayudu       5 2 . . .  . .  1 . .
## BA Stokes       . . . . .  . .  . . .
## CA Lynn         . . . . .  . .  9 . .
## CH Gayle       17 . . . . 17 .  . . .
## CH Morris       . . 3 . .  . .  . . .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    3.00    3.00    4.68    6.00   33.00

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

## Timing stopped at: 0.003 0 0.002

a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 52 users.
## 14 x 145 rating matrix of class 'realRatingMatrix' with 1634 ratings.
##      RMSE       MSE       MAE
##  3.529922 12.460350  2.532122

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
o=as(c,"data.frame")
names(o) =c("batsman","bowler","Sixes")


## 8. Batsman’s Fours

The code below estimates the fours hit by the batsmen

df3 <- select(df, batsman1,bowler1,fours)
df6 <- xtabs(fours ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## AB de Villiers   . 1 . . . 24 . 3 . .
## Abhishek Sharma  . . . . .  . . . . .
## AD Russell       1 . . . .  . . 9 . .
## AJ Finch         . 1 . . .  3 2 . . .
## AK Markram       . . . . .  . . . . .
## AM Rahane       11 . . . .  8 7 . . 3
## AR Patel         . . . . .  . . 3 . .
## AT Rayudu       11 2 3 . .  6 . 6 . .
## BA Stokes        1 . . . .  . . . . .
## CA Lynn          . . . . .  . . 6 . .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   3.000   4.000   6.339   9.000  55.000

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

## Timing stopped at: 0.008 0 0.008

## Warning in .local(x, method, ...):
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!

a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 67 users.
## 17 x 145 rating matrix of class 'realRatingMatrix' with 2083 ratings.
##      RMSE       MSE       MAE
##  5.486661 30.103447  4.060990

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
p=as(c,"data.frame")
names(p) =c("batsman","bowler","Fours")


## 9. Batsman’s Total Runs

The code below estimates the total runs that would have been scored by a batsman against different bowlers

df3 <- select(df, batsman1,bowler1,totalRuns)
df6 <- xtabs(totalRuns ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## A Badoni         .  . . . .   . .   . . .
## A Manohar        .  . . . .   . .   . . .
## A Nortje         .  . . . .   . .   . . .
## AB de Villiers  61 36 3 . 6 261 .  69 . .
## Abdul Samad      . 57 . . .  12 .   . . .
## Abhishek Sharma  3  . . . .   6 .   . . .
## AD Russell      39  . . . .   . . 129 . .
## AF Milne         .  . . . .   . .   . . .
## AJ Finch        15  7 . . 3  18 9   . . .
## AJ Tye           .  . . . .   . 4   . . .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    9.00   24.00   41.36   54.00  452.00

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 105 users.
## 27 x 145 rating matrix of class 'realRatingMatrix' with 3256 ratings.
##       RMSE        MSE        MAE
##   41.50985 1723.06788   29.52958

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
q=as(c,"data.frame")
names(q) =c("batsman","bowler","TotalRuns")


## 10. Batsman’s Balls Faced

The snippet estimates the balls faced by batsmen versus bowlers

df3 <- select(df, batsman1,bowler1,ballsFaced)
df6 <- xtabs(ballsFaced ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]

##
## A Badoni         .  . . . .   . .  . . .
## A Manohar        .  . . . .   . .  . . .
## A Nortje         .  . . . .   . .  . . .
## AB de Villiers  63 21 9 . 9 117 . 63 . .
## Abdul Samad      . 25 . . .  12 .  . . .
## Abhishek Sharma  2  . . . .   9 .  . . .
## AD Russell      35  . . . .   . . 66 . .
## AF Milne         .  . . . .   . .  . . .
## AJ Finch         6  6 . . 6  21 8  . . .
## AJ Tye           .  . . . .   9 4  . . .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    9.00   18.00   30.21   39.00  384.00

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 112 users.
## 28 x 145 rating matrix of class 'realRatingMatrix' with 3378 ratings.
##       RMSE        MSE        MAE
##   33.91251 1150.05835   23.39439

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
r=as(c,"data.frame")
names(r) =c("batsman","bowler","BallsFaced")


## 11. Generate the Batsmen Performance Estimate

This code generates the estimated dataframe with known and ‘predicted’ values

a1=merge(m,n,by=c("batsman","bowler"))
a2=merge(a1,o,by=c("batsman","bowler"))
a3=merge(a2,p,by=c("batsman","bowler"))
a4=merge(a3,q,by=c("batsman","bowler"))
a5=merge(a4,r,by=c("batsman","bowler"))
a6= select(a5, batsman,bowler,BallsFaced,TotalRuns,Fours, Sixes, SR,TimesOut)
head(a6)

##          batsman          bowler BallsFaced TotalRuns Fours Sixes  SR TimesOut
## 1 AB de Villiers        A Mishra         94       124     7     5 144        5
## 2 AB de Villiers        A Nortje         26        42     4     3 148        3
## 3 AB de Villiers         A Zampa         28        42     5     7 106        4
## 4 AB de Villiers Abhishek Sharma         22        28     0    10 136        5
## 5 AB de Villiers      AD Russell         70       135    14    12 207        4
## 6 AB de Villiers        AF Milne         31        45     6     6 130        3


## 12. Bowler analysis

Just like the batsman performance estimation, we can also estimate the bowlers’ performances. Consider the following table

As in the batsman analysis, for every batsman a set of features like (“strong backfoot player”, “360 degree player”, “power hitter”) can be estimated with a set of initial values, and every bowler will have an associated parameter vector $\theta$. Different bowlers will have performance data for different sets of batsmen. Based on the initial estimates of the features and the parameters, gradient descent can be used to minimize the error against the actual values (e.g. wicketTaken, treated as the ratings).

load("recom_data/bowlerVsBatsman20_22.rdata")


## 12a. Bowler dataframe

Inspecting the bowler dataframe

head(df2)

##    bowler1        batsman1 balls runsConceded       ER wicketTaken
## 1 A Mishra        A Badoni     0            0 0.000000           0
## 2 A Mishra       A Manohar     0            0 0.000000           0
## 3 A Mishra        A Nortje     0            0 0.000000           0
## 4 A Mishra  AB de Villiers    63           61 5.809524           0
## 5 A Mishra     Abdul Samad     0            0 0.000000           0
## 6 A Mishra Abhishek Sharma     2            3 9.000000           0

names(df2)

## [1] "bowler1"      "batsman1"     "balls"        "runsConceded" "ER"
## [6] "wicketTaken"


## 13. Balls bowled by bowler

The section below estimates the balls bowled by each bowler. We can see that UBCF Pearson and UBCF Cosine both perform well

df3 <- select(df2, bowler1,batsman1,balls)
df6 <- xtabs(balls ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]

##
## A Mishra        . . .  63  .  2 35 .  6 .
## A Nortje        . . .  21 25  .  . .  6 .
## A Zampa         . . .   9  .  .  . .  . .
## Abhishek Sharma . . .   9  .  .  . .  6 .
## AD Russell      . . . 117 12  9  . . 21 9
## AF Milne        . . .   .  .  .  . .  8 4
## AJ Tye          . . .  63  .  . 66 .  . .
## Akash Deep      . . .   .  .  .  . .  . .
## AR Patel        . . . 188  5  1 84 . 29 5
## Arshdeep Singh  . . .   6  6 24 18 . 12 .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    9.00   18.00   29.61   36.00  384.00

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 96 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3954 ratings.
##      RMSE       MSE       MAE
##  30.72284 943.89294  19.89204

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
s=as(c,"data.frame")
names(s) =c("bowler","batsman","BallsBowled")


## 14. Runs conceded by bowler

This section estimates the runs conceded by the bowler. The UBCF Cosinus algorithm performs the best, with the TPR increasing faster than the FPR

df3 <- select(df2, bowler1,batsman1,runsConceded)
df6 <- xtabs(runsConceded ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]

##
## A Mishra        . . .  61  .  3  41 . 15  .
## A Nortje        . . .  36 57  .   . .  8  .
## A Zampa         . . .   3  .  .   . .  .  .
## Abhishek Sharma . . .   6  .  .   . .  3  .
## AD Russell      . . . 276 12  6   . . 21  .
## AF Milne        . . .   .  .  .   . . 10  4
## AJ Tye          . . .  69  .  . 138 .  .  .
## Akash Deep      . . .   .  .  .   . .  .  .
## AR Patel        . . . 205  5  . 165 . 33 13
## Arshdeep Singh  . . .  18  3 51  51 .  6  .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    1.00    9.00   24.00   41.34   54.00  458.00

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

## Timing stopped at: 0.004 0 0.004

## Warning in .local(x, method, ...):
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!

a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 95 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3820 ratings.
##       RMSE        MSE        MAE
##   43.16674 1863.36749   30.32709

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
t=as(c,"data.frame")
names(t) =c("bowler","batsman","RunsConceded")


## 15. Economy Rate of the bowler

This section computes the economy rate of the bowler. The performance is not all that good

df3 <- select(df2, bowler1,batsman1,ER)
df6 <- xtabs(ER ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]

##
## A Mishra        . . .  5.809524  .     9.00  7.028571 . 15.000000  .
## A Nortje        . . . 10.285714 13.68  .     .        .  8.000000  .
## A Zampa         . . .  2.000000  .     .     .        .  .         .
## Abhishek Sharma . . .  4.000000  .     .     .        .  3.000000  .
## AD Russell      . . . 14.153846  6.00  4.00  .        .  6.000000  .
## AF Milne        . . .  .         .     .     .        .  7.500000  6.0
## AJ Tye          . . .  6.571429  .     .    12.545455 .  .         .
## Akash Deep      . . .  .         .     .     .        .  .         .
## AR Patel        . . .  6.542553  6.00  .    11.785714 .  6.827586 15.6
## Arshdeep Singh  . . . 18.000000  3.00 12.75 17.000000 .  3.000000  .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##  0.3529  5.2500  7.1126  7.8139  9.8000 36.0000

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

## Timing stopped at: 0.003 0 0.004

## Warning in .local(x, method, ...):
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!

a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 95 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3839 ratings.
##      RMSE       MSE       MAE
##  4.380680 19.190356  3.316556

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
u=as(c,"data.frame")
names(u) =c("bowler","batsman","EconomyRate")


## 16. Wickets Taken by bowler

The code below computes the wickets taken by the bowler versus different batsmen

df3 <- select(df2, bowler1,batsman1,wicketTaken)
df6 <- xtabs(wicketTaken ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]

## 10 x 10 sparse Matrix of class "dgCMatrix"

##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]

##
## A Mishra       . . . . . . 1 . . .
## A Nortje       . . . 4 . . . . . .
## A Zampa        . . . 3 . . . . . .
## AD Russell     . . . 3 . . . . . .
## AJ Tye         . . . 3 . . 6 . . .
## AR Patel       . . . 4 . 1 3 . 1 1
## Arshdeep Singh . . . 3 . . 3 . . .
## AS Rajpoot     . . . . . . 3 . . .
## Avesh Khan     . . . . . . 1 . 3 .
## B Kumar        . . . 9 . . 3 . 1 .

summary(getRatings(r0))

##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   3.000   3.000   3.423   3.000  21.000

evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))

## Timing stopped at: 0.003 0 0.003

## Warning in .local(x, method, ...):
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!

a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")

## Recommender of type 'UBCF' for 'realRatingMatrix'
## learned using 64 users.
## 16 x 195 rating matrix of class 'realRatingMatrix' with 1908 ratings.
##     RMSE      MSE      MAE
## 2.672677 7.143203 1.956934

b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
v=as(c,"data.frame")
names(v) =c("bowler","batsman","WicketTaken")


## 17. Generate the Bowler Performance estimate

The entire dataframe is regenerated with known and ‘predicted’ values

r1=merge(s,t,by=c("bowler","batsman"))
r2=merge(r1,u,by=c("bowler","batsman"))
r3=merge(r2,v,by=c("bowler","batsman"))
r4= select(r3,bowler, batsman, BallsBowled,RunsConceded,EconomyRate, WicketTaken)
head(r4)

##     bowler         batsman BallsBowled RunsConceded EconomyRate WicketTaken
## 1 A Mishra  AB de Villiers         102          144           8           4
## 2 A Mishra     Abdul Samad          13           20           7           4
## 3 A Mishra Abhishek Sharma          14           26           8           2
## 4 A Mishra      AD Russell          47           85           9           3
## 5 A Mishra        AJ Finch          45           61          11           4
## 6 A Mishra          AJ Tye          14           20           5           4


## 18. Conclusion

This post showed an approach to estimating batsman and bowler performance using collaborative filtering. The performance of the recommender engine could have been better. In any case, I think this approach will work for player estimation provided the recommender algorithm is able to achieve a high degree of accuracy. This would be a good way to estimate performance, as the algorithm can determine features and nuances of batsmen and bowlers which are not captured explicitly in the data.

## Also see

To see all posts click Index of posts

# Deconstructing Convolutional Neural Networks with Tensorflow and Keras

I have been very fascinated by how Convolutional Neural Networks are able to do image classification and image recognition so efficiently; CNNs have been very successful in both these tasks. A good paper that explores the workings of a CNN is Visualizing and Understanding Convolutional Networks by Matthew D Zeiler and Rob Fergus, in which they show how the input patterns that cause activations can be reconstructed through a reverse process of convolution using a deconvnet.

In their paper they show how, by passing the feature map through a deconvnet, which does the reverse process of the convnet, they can reconstruct the input pattern that originally caused a given activation in the feature map.

In the paper they say “A deconvnet can be thought of as a convnet model that uses the same components (filtering, pooling) but in reverse, so instead of mapping pixels to features, it does the opposite. An input image is presented to the CNN and feature activations computed throughout the layers. To examine a given convnet activation, we set all other activations in the layer to zero and pass the feature maps as input to the attached deconvnet layer. Then we successively (i) unpool, (ii) rectify and (iii) filter to reconstruct the activity in the layer beneath that gave rise to the chosen activation. This is then repeated until input pixel space is reached.”

I started to scout the internet to see how I could implement this reverse process of convolution to understand what really happens under the hood of a CNN. There are a lot of good articles and blogs, but I found that the post Applied Deep Learning – Part 4: Convolutional Neural Networks takes the visualization of a CNN one step further.

That post takes VGG16 as the pre-trained network and then uses it to display the intermediate visualizations. While the post was very informative and the visualizations of the various images were very clear, I wanted to simplify the problem for my own understanding.

Hence I decided to take the MNIST digit classification as my base problem. I created a simple 3 layer CNN which gives close to 99.1% accuracy and decided to see if I could do the visualization.

As mentioned in the above post, there are 3 major visualisations

1. Feature activations at the layer
2. Visualisation of the filters
3. Visualisation of the class outputs

Feature Activation – This visualizes the feature activations at the 3 different layers for a given input image. It can be seen that the first layer activates based on the edges of the image, while deeper layers activate in a more abstract way.

Visualization of the filters: This visualization shows what patterns the filters respond maximally to. This is implemented in Keras here

To do this the following is repeated in a loop

• Choose a loss function that maximizes the value of a convnet filter activation
• Do gradient ascent (maximization) in input space that increases the filter activation

Visualizing Class Outputs of the MNIST Convnet: This process is similar to determining the filter activation. Here the convnet is made to generate an image that represents the category maximally.

You can access the Google Colab notebook here – Deconstructing Convolutional Neural Networks in Tensorflow and Keras

import numpy as np
import pandas as pd
import os
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D, Input
from keras.models import Model
from sklearn.model_selection import train_test_split
from keras.utils import np_utils

Using TensorFlow backend.
In [0]:
mnist=tf.keras.datasets.mnist
# Set the training and test data and labels (loads the MNIST digits)
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()

In [0]:
#Reshape the training data to (n, 28, 28, 1)
X =np.array(training_images).reshape(training_images.shape[0],28,28,1)
# Normalize the images by dividing by 255.0
X = X/255.0
X.shape
# Use one hot encoding for the labels
Y = np_utils.to_categorical(training_labels, 10)
Y.shape
# Split training data into training and validation data in the ratio of 80:20
X_train, X_validation, y_train, y_validation = train_test_split(X,Y,test_size=0.20, random_state=42)

In [4]:
# Normalize test data
X_test =np.array(test_images).reshape(test_images.shape[0],28,28,1)
X_test=X_test/255.0
#Use OHE for the test labels
Y_test = np_utils.to_categorical(test_labels, 10)
X_test.shape

Out[4]:
(10000, 28, 28, 1)

# Display data

Display the training data and the corresponding labels

In [5]:
print(training_labels[0:10])
f, axes = plt.subplots(1, 10, sharey=True,figsize=(10,10))
for i,ax in enumerate(axes.flat):
    ax.axis('off')
    ax.imshow(X[i,:,:,0],cmap="gray")



# Create a Convolutional Neural Network

The CNN consists of 3 layers

• Conv2D of size 28 x 28 with 24 filters
• Perform Max pooling
• Conv2D of size 14 x 14 with 48 filters
• Perform max pooling
• Conv2d of size 7 x 7 with 64 filters
• Flatten
• Use Dense layer with 128 units
• Perform 25% dropout
• Perform categorical cross entropy with softmax activation function
In [0]:
num_classes=10
inputs = Input(shape=(28,28,1))
# First convolution layer; missing in the extract, inferred from the model summary below (24 filters, 240 params)
x = Conv2D(24, (3, 3), padding='same', activation='relu')(inputs)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(48, (3, 3), padding='same',activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Conv2D(64, (3, 3), padding='same',activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.25)(x)
output = Dense(num_classes,activation="softmax")(x)

model = Model(inputs,output)

model.compile(loss='categorical_crossentropy',
              optimizer='adam',  # optimizer not shown in the original extract; 'adam' assumed
              metrics=['accuracy'])


# Summary of CNN

Display the summary of CNN

In [7]:
model.summary()
Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, 28, 28, 1)         0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 28, 28, 24)        240
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 24)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 14, 14, 48)        10416
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 48)          0
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 7, 7, 64)          27712
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 3, 3, 64)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 576)               0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               73856
_________________________________________________________________
dropout_1 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290
=================================================================
Total params: 113,514
Trainable params: 113,514
Non-trainable params: 0
_________________________________________________________________


# Perform Gradient descent and validate with the validation data

In [8]:
epochs = 20
batch_size = 256
history = model.fit(X_train, y_train,
                    epochs=epochs,
                    batch_size=batch_size,
                    validation_data=(X_validation, y_validation))
#------------------------------------------------
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))  # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, label="training accuracy")
plt.plot(epochs, val_acc, label='validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.title('Training and validation loss')
plt.legend()

# Test the model on the test data
f, axes = plt.subplots(1, 10, sharey=True, figsize=(10,10))
for i, ax in enumerate(axes.flat):
    ax.axis('off')
    ax.imshow(X_test[i,:,:,0], cmap="gray")
l = []
for i in range(10):
    x = X_test[i].reshape(1,28,28,1)
    y = model.predict(x)
    m = np.argmax(y, axis=1)
    print(m)

[7]
[2]
[1]
[0]
[4]
[1]
[4]
[9]
[5]
[9]


# Generate the filter activations at the intermediate CNN layers

In [12]:
img = test_images[51].reshape(1,28,28,1)
fig = plt.figure(figsize=(5,5))
print(img.shape)
plt.imshow(img[0,:,:,0],cmap="gray")
plt.axis('off')


# Display the activations at the intermediate layers

This displays the intermediate activations as the image passes through the filters and generates these feature maps

In [13]:
layer_names = ['conv2d_4', 'conv2d_5', 'conv2d_6']

layer_outputs = [layer.output for layer in model.layers if 'conv2d' in layer.name]
activation_model = Model(inputs=model.input,outputs=layer_outputs)
intermediate_activations = activation_model.predict(img)
images_per_row = 8
max_images = 8

for layer_name, layer_activation in zip(layer_names, intermediate_activations):
    print(layer_name, layer_activation.shape)
    n_features = layer_activation.shape[-1]
    print("features=", n_features)
    n_features = min(n_features, max_images)
    print(n_features)

    size = layer_activation.shape[1]
    print("size=", size)
    n_cols = n_features // images_per_row
    display_grid = np.zeros((size * n_cols, images_per_row * size))

    for col in range(n_cols):
        for row in range(images_per_row):
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            # Post-process the feature map so that it is visually palatable
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            channel_image = np.clip(channel_image, 0, 255).astype('uint8')
            display_grid[col * size : (col + 1) * size,
                         row * size : (row + 1) * size] = channel_image
    scale = 2. / size
    plt.figure(figsize=(scale * display_grid.shape[1],
                        scale * display_grid.shape[0]))
    plt.axis('off')
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='viridis')

plt.show()

It can be seen that at the higher layers only abstract features of the input image are captured

# To fix the ImportError: cannot import name 'imresize' in the next cell, run this cell.
# Then comment it out, restart and run all
#!pip install scipy==1.1.0


## Visualize the pattern that the filters respond to maximally

• Choose a loss function that maximizes the value of the CNN filter in a given layer
• Start from a blank input image.
• Do gradient ascent in input space. Modify input values so that the filter activates more
• Repeat this in a loop.
In [14]:
from vis.visualization import visualize_activation, get_num_filters
from vis.utils import utils
from vis.input_modifiers import Jitter

max_filters = 24
selected_indices = []
vis_images = [[], [], [], [], []]
i = 0
selected_filters = [[0, 3, 11, 15, 16, 17, 18, 22],
[8, 21, 23, 25, 31, 32, 35, 41],
[2, 7, 11, 14, 19, 26, 35, 48]]

# Set the layers
layer_name = ['conv2d_4', 'conv2d_5', 'conv2d_6']
# Set the layer indices
layer_idx = [1,3,5]
for layer_name, layer_idx in zip(layer_name, layer_idx):

    # Visualize all filters in this layer.
    if selected_filters:
        filters = selected_filters[i]
    else:
        # Randomly select filters
        filters = sorted(np.random.permutation(get_num_filters(model.layers[layer_idx]))[:max_filters])
    selected_indices.append(filters)

    # Generate input image for each filter.
    # Loop through the selected filters in each layer and generate the activation of these filters
    for idx in filters:
        img = visualize_activation(model, layer_idx, filter_indices=idx, tv_weight=0.,
                                   input_modifiers=[Jitter(0.05)], max_iter=300)
        vis_images[i].append(img)

    # Generate stitched image palette with 4 cols so we get 2 rows.
    stitched = utils.stitch_images(vis_images[i], cols=4)
    plt.figure(figsize=(20, 30))
    plt.title(layer_name)
    plt.axis('off')
    stitched = stitched.reshape(1,61,127,1)
    plt.imshow(stitched[0,:,:,0])
    plt.show()
    i += 1

from vis.utils import utils
new_vis_images = [[], [], [], [], []]
i = 0
layer_name = ['conv2d_4', 'conv2d_5', 'conv2d_6']
layer_idx = [1,3,5]
for layer_name, layer_idx in zip(layer_name, layer_idx):

    # Generate input image for each filter, seeded with the earlier visualization
    for j, idx in enumerate(selected_indices[i]):
        img = visualize_activation(model, layer_idx, filter_indices=idx,
                                   seed_input=vis_images[i][j], input_modifiers=[Jitter(0.05)], max_iter=300)
        #img = utils.draw_text(img, 'Filter {}'.format(idx))
        new_vis_images[i].append(img)

    stitched = utils.stitch_images(new_vis_images[i], cols=4)
    plt.figure(figsize=(20, 30))
    plt.title(layer_name)
    plt.axis('off')
    print(stitched.shape)
    stitched = stitched.reshape(1,61,127,1)
    plt.imshow(stitched[0,:,:,0])
    plt.show()
    i += 1



## Visualizing Class Outputs

Here the CNN will generate the image that maximally represents each category. Each output represents one of the digits, as can be seen below

from vis.utils import utils
from keras import activations

codes = '''
zero 0
one 1
two 2
three 3
four 4
five 5
six 6
seven 7
eight 8
nine 9
'''
layer_idx = 10
initial = []
images = []
tuples = []
# Swap softmax with linear for better visualization
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
for line in codes.split('\n'):
    if not line:
        continue
    name, idx = line.rsplit(' ', 1)
    idx = int(idx)
    # Generate an image that maximally activates this output class
    img = visualize_activation(model, layer_idx, filter_indices=idx,
                               tv_weight=0., max_iter=300, input_modifiers=[Jitter(0.05)])
    initial.append(img)
    tuples.append((name, idx))

i = 0
for name, idx in tuples:
    # Refine each visualization by seeding it with the image generated above
    img = visualize_activation(model, layer_idx, filter_indices=idx,
                               seed_input=initial[i], max_iter=300,
                               input_modifiers=[Jitter(0.05)])
    # img = utils.draw_text(img, name)  # Unable to display text on a gray scale image
    i += 1
    images.append(img)

stitched = utils.stitch_images(images, cols=4)
plt.figure(figsize=(20, 20))
plt.axis('off')
stitched = stitched.reshape(1, 94, 127, 1)
plt.imshow(stitched[0, :, :, 0])

plt.show()



In the grid below the class outputs show the MNIST digits to which each output responds maximally. We can see the ghostly outlines
of the digits 0 – 9. We can clearly see the outline of 0, 1, 2, 3, 4, 5 (yes, it is there!), 6, 7, 8 and 9. If you look at this from a little distance the digits are clearly visible. Isn't that really cool!!

## Conclusion:

It is really interesting to see the class outputs, which show the image to which each class output responds maximally. In the
post Applied Deep Learning – Part 4: Convolutional Neural Networks the class outputs show much more complicated images and are worth a look. It is also interesting to note that the model has adjusted the filter values and the weights of the fully connected network to respond maximally to the MNIST digits

## Also see

To see all posts click Index of posts

# Understanding Neural Style Transfer with Tensorflow and Keras

Neural Style Transfer (NST) is a fascinating area of Deep Learning and Convolutional Neural Networks. NST is a technique in which the style from one image, known as the 'style image', is transferred to another image, the 'content image', to give a third, generated image which has the content of the content image and the style of the style image.

NST can be used to reimagine how famous painters like Van Gogh, Claude Monet or Picasso would have visualised a scenery or architecture. NST uses Convolutional Neural Networks (CNNs) to achieve this artistic style transfer from one image to another. NST was originally implemented by Gatys et al. in their paper A Neural Algorithm of Artistic Style. Convolutional Neural Networks have been very successful in image classification, image recognition et cetera, and they have also generated very interesting pictures with Neural Style Transfer, as will be shown in this post. An interesting aspect of CNNs is that the first couple of layers capture basic features of the image like edges and pixel values, but as we go deeper into the CNN, the network captures higher-level features of the input image.

To get started with Neural Style Transfer we will be using the VGG19 pre-trained network. The VGG19 CNN is a compact pre-trained network which can be used for performing NST. We could also have used ResNet or InceptionV3 networks for this purpose, but these are very large networks. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
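Before any losses can be computed we need the intermediate activations of VGG19 for a given image. Here is a minimal tf.keras sketch of such a feature extractor; the layer names are the standard VGG19 ones (the same content and style layers used later in this post):

import tensorflow as tf

# Load VGG19 without the classifier head and freeze its weights
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False

content_layers = ['block5_conv2']
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                'block4_conv1', 'block5_conv1']

# A model that maps an image to the chosen intermediate activations
outputs = [vgg.get_layer(name).output for name in content_layers + style_layers]
feature_extractor = tf.keras.Model(inputs=vgg.input, outputs=outputs)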

What needs to be done to transfer the style from one image to another? This brings us to the question – what is 'style'? What is it that distinguishes Van Gogh's painting or Picasso's cubist art? Convolutional Neural Networks capture basic features in the lower layers and much more complex features in the deeper layers. Style can be computed by taking the correlation of the feature maps in a layer L; this is my interpretation of how style is captured. Since style is intrinsic to the image, the style feature should exist across all the filters in a layer. Hence, to pick up this style we need the correlation of the filters across the channels of a layer. This is computed mathematically using the Gram matrix, which calculates the correlation between the activations of the filters for the style image and the generated image.

To transfer the style from the style image to the content image we need to do the following operations during forward propagation
– Compute the content loss between the source image and the generated image
– Compute the style loss between the style image and the generated image
– Finally we need to compute the total loss

In order to transfer the style from the 'style' image to the 'content' image, resulting in a 'generated' image, the total loss has to be minimised. Therefore backward propagation with gradient descent is done to minimise the total loss, comprising the content loss and the style loss.

Initially we make the Generated Image ‘G’ the same as the source image ‘S’

The content loss at layer 'l' is

$L_{content} = \frac{1}{2} \sum_{i,j} ( F^{l}_{ij} - P^{l}_{ij})^{2}$

where $F^{l}_{ij}$ and $P^{l}_{ij}$ represent the activations of filter i at position j in layer 'l' for the generated and source images respectively. The intuition is that the activations will be the same for a similar source and generated image. We need to minimise the content loss so that the generated stylized image is as close to the original image as possible. An intermediate layer of VGG19, block5_conv2, is used
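The content loss translates directly into a couple of lines of Tensorflow. A minimal sketch, where content_features and generated_features stand for the block5_conv2 activations of the content and generated images obtained from the feature extractor sketched earlier:

import tensorflow as tf

def content_loss(content_features, generated_features):
    # L_content = 1/2 * sum of squared differences of the layer activations
    return 0.5 * tf.reduce_sum(tf.square(generated_features - content_features))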

The Style layers that are used are

style_layers = [‘block1_conv1’,
‘block2_conv1’,
‘block3_conv1’,
‘block4_conv1’,
‘block5_conv1’]
To compute the Style Loss the Gram matrix needs to be computed. The Gram matrix is computed by unrolling the filters as shown below (source: Convolutional Neural Networks by Prof Andrew Ng, Coursera). The result is a matrix of size $n_{c} \times n_{c}$ where $n_{c}$ is the number of channels.
The above diagram shows the filters of height $n_{H}$ and width $n_{W}$ with $n_{C}$ channels.
The contribution of layer 'l' to the style loss is given by

$L^{l}_{style} = \frac{1}{4N_{l}^{2}M_{l}^{2}}\sum_{i,j} (G^{l}_{ij} - A^{l}_{ij})^{2}$

where $G^{l}_{ij}$ and $A^{l}_{ij}$ are the Gram matrices of the generated and style images respectively, $N_{l}$ is the number of channels and $M_{l}$ the height × width of the feature maps in layer 'l'. By minimising the distance between the Gram matrices of the style and generated images we can ensure that the generated image is a stylized version of the original image, similar in style to the style image.
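A minimal sketch of the Gram matrix and the per-layer style loss, assuming activations of shape (1, H, W, n_c) as returned by the feature extractor above:

import tensorflow as tf

def gram_matrix(features):
    # Unroll the H x W positions and correlate the channels: an n_c x n_c matrix
    x = tf.reshape(features, [-1, tf.shape(features)[-1]])   # (H*W, n_c)
    return tf.matmul(x, x, transpose_a=True)

def style_loss_layer(style_features, generated_features):
    # Squared distance between the Gram matrices, normalised as in the formula above
    G = gram_matrix(generated_features)
    A = gram_matrix(style_features)
    shape = tf.shape(generated_features)
    N = tf.cast(shape[-1], tf.float32)               # N_l: number of channels
    M = tf.cast(shape[1] * shape[2], tf.float32)     # M_l: height x width
    return tf.reduce_sum(tf.square(G - A)) / (4.0 * N ** 2 * M ** 2)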
The total loss is given by
$L_{total} = \alpha L_{content} + \beta L_{style}$
Back propagation with gradient descent works to minimise the content loss between the source and generated images, while the style loss term minimises the discrepancy between the styles of the style image and the generated image. Running forward and backward propagation over several epochs successfully transfers the style from the style image to the source image.
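Putting the pieces together, here is a minimal optimisation-loop sketch using the feature extractor and loss functions sketched above. content_image and style_image are assumed to be preprocessed tensors of shape (1, H, W, 3), and the weights alpha and beta are illustrative values, not the ones used in my notebook:

import tensorflow as tf

# Targets are computed once from the fixed content and style images
content_targets = feature_extractor(content_image)
style_targets = feature_extractor(style_image)

generated = tf.Variable(content_image)        # start G from the content image
opt = tf.keras.optimizers.Adam(learning_rate=0.02)
alpha, beta = 1e4, 1e-2                       # illustrative content/style weights

@tf.function
def train_step():
    with tf.GradientTape() as tape:
        feats = feature_extractor(generated)
        # The first output is the content layer; the rest are the style layers
        c_loss = content_loss(content_targets[0], feats[0])
        s_loss = tf.add_n([style_loss_layer(a, f)
                           for a, f in zip(style_targets[1:], feats[1:])])
        total = alpha * c_loss + beta * s_loss
    grad = tape.gradient(total, generated)
    opt.apply_gradients([(grad, generated)])

for epoch in range(1000):
    train_step()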
You can check the Notebook at Neural Style Transfer

Note: The code in this notebook is largely based on the Neural Style Transfer tutorial from Tensorflow, though I may have taken some changes from other blogs. I also made a few changes to the code in this tutorial, like removing the scaling factor and the class definition (personally, I belong to the old school (C language) and am not much in love with 'self'). All references are included below.

Note: Here is an interesting thought. Could we do Neural Style Transfer in music? Imagine Carlos Santana playing 'Hotel California' or Brian May playing 'Another Brick in the Wall'. While our first reaction might be that it would not sound good, as we are used to the style of these songs, we may be surprised by a possible style transfer. This is definitely music to the ears!

Here are a few runs from this implementation

## Run 1

1. Neural Style Transfer – a) Content Image – My portrait. b) Style Image – Wassily Kandinsky's Composition, oil on canvas, 1913

2. Result of Neural Style Transfer

## Run 2

1. a) Content Image – Portrait of my parents. b) Style Image – Vincent Van Gogh's Starry Night, oil on canvas, 1889

2. Result of Neural Style Transfer

## Run 3

1. a) Content Image – Caesar 2 (Masai Mara – 20 Jun 2018). b) Style Image – The Great Wave off Kanagawa – Katsushika Hokusai, 1826-1833

2. Result of Neural Style Transfer

## Run 4

1. a) Content Image – Junagarh Fort, Rajasthan, Sep 2016. b) Style Image – Le Pont Japonais by Claude Monet, oil on canvas, 1920

2. Result of Neural Style Transfer

Neural Style Transfer is a very ingenious idea which shows that we can segregate the style of a painting and transfer it to another image.

### References

1. A Neural Algorithm of Artistic Style, Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
2. Neural style transfer
3. Neural Style Transfer: Creating Art with Deep Learning using tf.keras and eager execution
4. Convolutional Neural Network, DeepLearning.AI Specialization, Prof Andrew Ng
5. Intuitive Guide to Neural Style Transfer

To see all posts click Index of posts

# Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8

“Lights, camera and … action – Take 4+!”

This post includes a rework of all presentations of 'Elements of Neural Networks and Deep Learning – Parts 1-8', since my earlier presentations had some missing parts, omissions and occasional errors. So I have re-recorded all the presentations.
This series of presentations does a deep-dive into Deep Learning networks, starting from the fundamentals. The equations required for performing learning in an L-layer Deep Learning network are derived in detail, starting from the basics. Further, the presentations also discuss multi-class classification, regularization techniques, and gradient descent optimization methods in deep networks. Finally, the presentations also touch on how Deep Learning networks can be tuned.

The corresponding implementations in vectorized R, Python and Octave are available in my book 'Deep Learning from first principles: Second edition – In vectorized Python, R and Octave'

1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning: a look at the history of Neural Networks and Perceptrons, why Deep Learning networks are required, concluding with a simple toy example of a Neural Network and how it computes. This part also includes a small digression on the basics of Machine Learning and how an algorithm learns from a data set

2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2 layer Neural network. The presentation also takes a look at forward & backward propagation and how the cost is minimized using gradient descent

The implementation of the discussed 2 layer Neural Network in vectorized R, Python and Octave are available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1‘

3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part, discusses a primitive neural network with an input layer, output layer and a hidden layer. The neural network uses tanh activation in the hidden layer and a sigmoid activation in the output layer. The equations for forward and backward propagation are derived.

To see the implementations for the above discussed video see my post ‘Deep Learning from first principles in Python, R and Octave – Part 2

4. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation in which I derived the equations for a simple 3 layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for a L-Layer, multi-unit Deep Learning Network, with any activation function g(z)

The implementations of L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

5. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation for the Jacobian of the Softmax is discussed, and subsequently the derivative of cross-entropy loss is also discussed in detail. Finally the final set of equations for a Neural Network with multi-class classification is derived.

The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

6. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly the dropout method of regularization is also discussed

The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

7. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces exponentially weighted moving average and shows how this is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, momentum method, rmsprop and adam.

The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

8. Elements of Neural Networks and Deep Learning – Part 8
This last part touches on the method to adopt while tuning hyper-parameters in Deep Learning networks

Checkout my book 'Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave'. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version ($9.99/Rs449).

This concludes this series of presentations on “Elements of Neural Networks and Deep Learning’

To see all posts click Index of posts

# My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 6,7,8

This is the final set of presentations in my series ‘Elements of Neural Networks and Deep Learning’. This set follows the earlier 2 sets of presentations namely
1. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Part1,2,3
2. My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 4,5

In this final set of presentations I discuss initialization methods and regularization techniques, including dropout. Next I discuss gradient descent optimization methods like momentum, rmsprop, adam etc. Lastly, I briefly touch on hyper-parameter tuning approaches. The corresponding implementations in vectorized R, Python and Octave are available in my book 'Deep Learning from first principles: Second edition – In vectorized Python, R and Octave'

1. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly the dropout method of regularization is also discussed

The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

2. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces exponentially weighted moving average and shows how this is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, momentum method, rmsprop and adam.

The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

3. Elements of Neural Networks and Deep Learning – Part 8
This last part touches upon hyper-parameter tuning in Deep Learning networks

This concludes this series of presentations on “Elements of Neural Networks and Deep Learning’

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8 . These have more content and also include some corrections. Check it out!

Checkout my book 'Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave'. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version ($9.99/Rs449).

To see all posts click Index of posts

# My presentations on ‘Elements of Neural Networks & Deep Learning’ -Parts 4,5

This is the next set of presentations on 'Elements of Neural Networks and Deep Learning'. In the 4th presentation I discuss and derive the generalized equations for a multi-unit, multi-layer Deep Learning network. The 5th presentation derives the equations for a Deep Learning network performing multi-class classification, along with the derivation of the cross-entropy loss. The corresponding implementations in vectorized R, Python and Octave are available in my book 'Deep Learning from first principles: Second edition – In vectorized Python, R and Octave'

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8 . These have more content and also include some corrections. Check it out!

1. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation in which I derived the equations for a simple 3 layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for a L-Layer, multi-unit Deep Learning Network, with any activation function g(z)

The implementations of L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

2. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation for the Jacobian of the Softmax is discussed, and subsequently the derivative of cross-entropy loss is also discussed in detail. Finally the final set of equations for a Neural Network with multi-class classification is derived.

The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

To be continued. Watch this space!

Checkout my book 'Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave'. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version ($9.99/Rs449).

To see all posts click Index of Posts

# My presentations on ‘Elements of Neural Networks & Deep Learning’ -Part1,2,3

I will be uploading a series of presentations on 'Elements of Neural Networks and Deep Learning'. In these video presentations I discuss the derivations of L-Layer Deep Learning networks, starting from the basics. The corresponding implementations in vectorized R, Python and Octave are available in my book 'Deep Learning from first principles: Second edition – In vectorized Python, R and Octave'

1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning: a look at the history of Neural Networks and Perceptrons, why Deep Learning networks are required, concluding with a simple toy example of a Neural Network and how it computes

2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2 layer Neural network. The presentation also takes a look at forward & backward propagation and how the cost is minimized using gradient descent

The implementation of the discussed 2 layer Neural Network in vectorized R, Python and Octave are available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1

3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part, discusses a primitive neural network with an input layer, output layer and a hidden layer. The neural network uses tanh activation in the hidden layer and a sigmoid activation in the output layer. The equations for forward and backward propagation are derived.

To see the implementations for the above discussed video see my post ‘Deep Learning from first principles in Python, R and Octave – Part 2

Important note: Do check out my later version of these videos at Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8 . These have more content and also include some corrections. Check it out!

To be continued. Watch this space!

Checkout my book 'Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave'. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in kindle version ($9.99/Rs449).

To see all posts click Index of posts

# My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

Are you wondering whether to get into the ‘R’ bus or ‘Python’ bus?
My suggestion to you is "Why not get into the 'R and Python' train?"

The third edition of my book 'Practical Machine Learning with R and Python – Machine Learning in stereo' is now available in both paperback ($12.99) and kindle ($8.99/Rs449) versions. In the third edition all code sections have been re-formatted to use the fixed-width font 'Consolas'. This neatly aligns output that has columns, like confusion matrices and dataframes, making the code more readable. There is a science to formatting too, which improves the look and feel! It is little wonder that Steve Jobs had a keen passion for calligraphy! Additionally, some typos have been fixed.

In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code.
1. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Paperback – $12.99)
2. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Kindle – $8.99/Rs449)

This book is ideal for both beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and deal with equivalent statements in the two languages. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Preface
Introduction
1. Essential R
2. Essential Python for Datascience
3. R vs Python
4. Regression of a continuous variable
5. Classification and Cross Validation
6. Regression techniques and regularization
7. SVMs, Decision Trees and Validation curves
8. Splines, GAMs, Random Forests and Boosting
9. PCA, K-Means and Hierarchical Clustering
References

Hope you have a great time learning as I did while implementing these algorithms!

# My book ‘Deep Learning from first principles:Second Edition’ now on Amazon

The second edition of my book 'Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave' is now available on Amazon, in both paperback ($18.99) and kindle ($9.99/Rs449) versions. Since this book is almost 70% code, all functions and code snippets have been formatted to use the fixed-width font 'Lucida Console'. In addition, line numbers have been added to all code snippets. This makes the code more organized and much more readable. I have also fixed typos in the book.

The book includes the following chapters

Table of Contents
Preface
Introduction
1. Logistic Regression as a Neural Network
2. Implementing a simple Neural Network
3. Building a L-Layer Deep Learning Network
4. Deep Learning network with the Softmax
5. MNIST classification with Softmax
6. Initialization, regularization in Deep Learning
7. Gradient Descent Optimization techniques
8. Gradient Check in Deep Learning
Appendix A
Appendix 1 – Logistic Regression as a Neural Network
Appendix 2 – Implementing a simple Neural Network
Appendix 3 – Building a L-Layer Deep Learning Network
Appendix 4 – Deep Learning network with the Softmax
Appendix 5 – MNIST classification with Softmax
Appendix 6 – Initialization, regularization in Deep Learning
Appendix 7 – Gradient Descent Optimization techniques
Appendix 8 – Gradient Check
References

To see posts click Index of Posts

# My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon

Note: The 3rd edition of this book is now available My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

The second edition of my book 'Practical Machine Learning with R and Python – Machine Learning in stereo' is now available in both paperback ($12.99) and kindle ($9.99/Rs449) versions. This second edition includes more content, extensive comments and formatting for better readability.

In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code.
1. Practical Machine Learning with R and Python: Second Edition – Machine Learning in Stereo (Paperback – $12.99)
2. Practical Machine Learning with R and Python: Second Edition – Machine Learning in Stereo (Kindle – $9.99/Rs449)

This book is ideal for both beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and deal with equivalent statements in the two languages. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Preface
Introduction
1. Essential R
2. Essential Python for Datascience
3. R vs Python
4. Regression of a continuous variable
5. Classification and Cross Validation
6. Regression techniques and regularization
7. SVMs, Decision Trees and Validation curves
8. Splines, GAMs, Random Forests and Boosting
9. PCA, K-Means and Hierarchical Clustering
References