It is carnival time again as IPL 2023 is underway!! GooglyPlusPlus now includes AI/ML models for computing the ball-by-ball Win Probability of matches and each individual player's Win Probability Contribution (WPC). GooglyPlusPlus uses 2 ML models
Deep Learning (Tensorflow) – accuracy: 0.8584
Logistic Regression (glmnet-tidymodels) – accuracy: 0.728
As before, GooglyPlusPlus also includes the usual near real-time analytics, with the Shiny app being automatically updated with the previous day's match data.
Note: The Win Probability computation can also be done on a live feed of streaming data. Since I don't have access to live feeds, the app shows how Win Probability changed during the course of completed matches. For more details on Win Probability and Win Probability Contribution see my posts
GooglyPlusPlus has also been updated with match data from all the latest T20 leagues. It includes data from BBL 2022, NTB 2022, CPL 2022, PSL 2023, ICC T20 2022 and now IPL 2023.
GooglyPlusPlus has the following functionality
Batsman tab: For detailed analysis of batsmen
Bowler tab: For detailed analysis of bowlers
Match tab: Analysis of individual matches, plot of Runs vs SR, Wickets vs ER in power play, middle and death overs, Win Probability Analysis of teams and Win Probability Contribution of players
Head-to-head tab: Detailed analysis of team-vs-team batting/bowling scorecard, batting, bowling performances, performances in power play, middle and death overs
Team performance tab: Analysis of team-vs-all other teams with batting /bowling scorecard, batting, bowling performances, performances in power play, middle and death overs
Optimisation tab: Allows one to pit batsmen vs bowlers and vice-versa. This tab also uses integer programming to optimise batting and bowling lineup
Batting analysis tab: Ranks batsmen using Runs or SR. Also plots performances of batsmen in power play, middle and death overs and plots them in a 4×4 grid
Bowling analysis tab: Ranks bowlers based on Wickets or ER. Also plots performances of bowlers in power play, middle and death overs and plots them in a 4×4 grid
Also note that all these tabs and features are available for all T20 formats, namely IPL, Intl. T20 (men, women), BBL, NTB, PSL, CPL, SSM.
Important note: It is possible that, at times, the Win Probability (Deep Learning) for some recent IPL matches will give an error. This is because I need to rebuild the models on a daily basis, as the models use player embeddings and there are new players. While I will definitely rebuild the models on weekends and whenever I find time, you may have to bear with this error occasionally.
Note: All charts are interactive, which means that you can hover, zoom in, zoom out, pan etc. on the charts
The latest avatar of GooglyPlusPlus2023 is based on my R package yorkr with data from Cricsheet.
Follow me on twitter for daily highlights @tvganesh_85
GooglyPlusPlus can analyse players, matches, teams, rank, compute win probability and much more.
Included below are some random analyses of IPL 2023 matches so far
A) Chennai Super Kings vs Gujarat Titans – 31 Mar 2023
GT won by 5 wickets ( 4 balls remaining)
a) Worm Wicket Chart
b) Ball-by-ball Win Probability (Logistic Regression) (side-by-side)
This model shows that CSK had the upper hand in the penultimate over, before the advantage shifted to GT. More details on Win Probability and Win Probability Contribution are in the posts linked above.
c) Ball-by-ball Win Probability (Logistic Regression) (overlapping)
Here the ball-by-ball win probabilities are overlapped. CSK and GT both had nearly the same probability of winning in the penultimate over before GT edged CSK out.
B) Punjab Kings vs Rajasthan Royals – 05 Apr 2023
This was another closely fought match. PBKS won by 5 runs.
a) Worm wicket chart
b) Batting partnerships
Shikhar Dhawan scored 86 runs
c) Ball-by-ball Win Probability using Deep Learning (overlapping)
PBKS was generally ahead in the win probability race
d) Batsman Win Probability Contribution
This plot shows how the different batsmen contributed to the Win Probability. We can see that Shikhar Dhawan has the highest win probability contribution; he played a very sensible innings. Also, it appears that there is little difference between Prabhsimran Singh and the others, though he scored 60 runs. This computation is based on when a batsman comes in to bat and how the win probability changes when he gets dismissed, as seen in the 2nd chart.
C) Delhi Capitals vs Gujarat Titans – 4 Apr 2023
GT won by 6 wickets (11 balls remaining)
a) Worm wicket chart
b) Runs scored across 20 overs
c) Runs vs SR plot
d) Batting scorecard (Gujarat Titans)
e) Batsman Win Probability Contribution (Gujarat Titans)
Miller has a higher percentage in the Win Contribution than Sai Sudershan, who held the innings together. Strange are the ways of the ML models!!
D) Sunrisers Hyderabad vs Lucknow Supergiants ( 7 Apr 2023)
LSG won by 5 wickets (24 balls left). SRH were bamboozled by the pitch while LSG was able to cruise along
a) Worm wicket chart
b) Wickets vs ER plot
c) Wickets across 20 overs
d) Ball-by-ball win probability using Deep Learning (overlapping)
e) Bowler Win Probability Contribution (LSG)
Bishnoi has a higher win probability contribution than Krunal, though he took just 1 wicket to Krunal's 3. This is based on how the Win Probability changed at that point in the game.
The above set of plots are just a random sample.
Note: There are 8 tabs each for the 9 T20 leagues (BBL, CPL, Intl. T20 (men), Intl. T20 (women), IPL, PSL, NTB, SSM, WBB), so there are many more detailed charts/analyses.
In this post, I compute each batsman's or bowler's Win Probability Contribution (WPC) in a T20 match. This metric captures by how much the player (batsman or bowler) changed/impacted the Win Probability of the T20 match. For this computation I use the machine learning models I had created earlier, which predict the ball-by-ball win probability as the T20 match progresses through the 2 innings.
In the picture snippet below, you can see how the win probability changes ball-by-ball for each batsman for a T20 match between CSK vs LSG- 31 Mar 2022
In my previous posts I had created several Machine Learning models. In order to compute the player’s Win Probability contribution in this post, I have used the following ML models
The batsman's or bowler's win probability contribution changes ball-by-ball. The player's contribution is calculated as the difference in win probability between when the batsman faces the 1st ball of his innings and the last ball, either when he is out or when the innings comes to an end. If the difference is +ve then the player has had a positive impact, and likewise for a negative contribution. Similarly, for a bowler, it is the change in win probability from when he/she comes in to bowl to the last delivery he/she bowls.
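As a concrete illustration, here is a minimal sketch of this calculation. The helper and the wp series are hypothetical; in GooglyPlusPlus the ball-by-ball win probabilities come from the two ML models above.

# Hypothetical helper: WPC as the difference in win probability between a
# player's first and last ball. `wp` is the team's ball-by-ball win
# probability series produced by the model.
def win_prob_contribution(wp, first_ball, last_ball):
    return wp[last_ball] - wp[first_ball]   # +ve => positive impact

# e.g. a batsman who came in at ball 12 and was dismissed at ball 45
# contribution = win_prob_contribution(wp, 12, 45)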
Note: The Win Probability Contribution does not have any relation to how many runs or at what strike rate the batsman scored. Rather, the model computes a different win probability for each player, based on his/her embedding, the ball in the innings and six other features like runs, run rate, runsMomentum etc. These values change for every ball, as seen in the table above. Also, this is not continuous. The 2 ML models determine the Win Probability for a specific player, ball and context in the match.
This metric is similar to Win Probability Added (WPA) used in Sabermetrics for baseball. Here is the definition of WPA from Fangraphs “Win Probability Added (WPA) captures the change in Win Expectancy from one plate appearance to the next and credits or debits the player based on how much their action increased their team’s odds of winning.” This article in Fangraphs explains in detail how this computation is done.
In this post I have added 4 new functions to my R package yorkr.
batsmanWinProbLR – batsman’s win probability contribution based on glmnet (Logistic Regression)
bowlerWinProbLR – bowler’s win probability contribution based on glmnet (Logistic Regression)
batsmanWinProbDL – batsman’s win probability contribution based on Deep Learning Model
bowlerWinProbDL – bowler's win probability contribution based on Deep Learning
Hence there are 4 additional features in GooglyPlusPlus based on the above 4 functions. In addition I have also updated
-winProbLR (overLap) function to include the names of batsmen when they come in to bat and when they get out or the innings comes to an end, based on Logistic Regression
-winProbDL (overLap) function to include the names of batsmen when they come in to bat and when they get out, based on Deep Learning
Hence there are 6 new features in this version of GooglyPlusPlus.
Note: All these new 6 features are available for all 9 formats of T20 in GooglyPlusPlus namely
a) IPL b) BBL c) NTB d) PSL e) Intl. T20 (men) f) Intl. T20 (women) g) WBB h) CPL i) SSM
Check out the latest version of GooglyPlusPlus at gpp2023-2
Note: The data for GooglyPlusPlus comes from Cricsheet and the Shiny app is based on my R package yorkr
A) Chennai SuperKings vs Delhi Capitals – 04 Oct 2021
To understand Win Probability Contribution better let us look at Chennai Super Kings vs Delhi Capitals match on 04 Oct 2021
This was a closely fought match with fortunes swinging wildly. Let us take a look at the Worm wicket chart of this match.
a) Worm Wicket chart – CSK vs DC – 04 Oct 2021
Delhi Capitals finally win the match
b) Win Probability Logistic Regression (side-by-side) – CSK vs DC – 4 Oct 2021
Plotting how win probability changes over the course of the match using Logistic Regression Model
In this match Delhi Capitals won. The batting scorecard of Delhi Capitals
c) Batting Scorecard of Delhi Capitals – CSK vs DC – 4 Oct 2021
d) Win Probability Logistic Regression (Overlapping) – CSK vs DC – 4 Oct 2021
The Win Probability LR (overlapping) shows the probability functions of both teams superimposed over one another. The plot includes when a batsman came in to play and when he got out, for both teams. This looks a little noisy, but there is a way to selectively display the change in Win Probability for each team. This can be done by clicking the 3 arrows (orange or blue) from top to bottom. First double-click the team CSK or DC, then click the next 2 items (blue, red or black, grey). Sorry the legends don't match the colors! 😦
Below we can see how the win probability changed for Delhi Capitals during their innings, as batsmen came in to play.
e) Batsman Win Probability contribution: DC – CSK vs DC – 4 Oct 2021
Computing the individual batsmen's Win Contribution and plotting, we see that Hetmeyer has a higher Win Probability contribution than Shikhar Dhawan despite scoring fewer runs.
f) Bowler’s Win Probability contribution :CSK – CSK vs DC – 4 Oct 2021
We can also check the Win Probability contribution of the bowlers, for e.g. the CSK bowlers, and see which bowlers had the most impact. Moeen Ali had the least impact in this match.
B) Intl. T20 (men) Australia vs India – 25 Sep 2022
a) Worm wicket chart – Australia vs India – 25 Sep 2022
This was another close match in which India won with the penultimate ball
b) Win Probability based on Deep Learning model (side-by-side) – Australia vs India – 25 Sep 2022
c) Win Probability based on Deep Learning model (overlapping) – Australia vs India – 25 Sep 2022
The plot below shows how the Win Probability of the teams varied across the 20 overs. The 2 Win Probability distributions are superimposed over each other
d) Batsman Win Probability Contribution : India – Australia vs India – 25 Sep 2022
Selectively choosing the India Win Probability plot by double-clicking the legend 'India' on the right, followed by a single click of the black and grey legends, we have
We see that Kohli and Suryakumar Yadav have good contributions to the Win Probability.
e) Plotting the Runs vs Strike Rate:India – Australia vs India – 25 Sep 2022
f) Batsman’s Win Probability Contribution-Australia vs India – 25 Sep 2022
Finally plotting the Batsman’s Win Probability Contribution
Interestingly, Kohli has a greater Win Probability Contribution than SKY, though SKY scored more runs at a better strike rate. As mentioned above, the Win Probability is context dependent and also depends on past performances of the player (batsman, bowler)
Finally let us look at
C) India vs England Intl. T20 Women (11 July 2021)
a) Worm wicket chart – India vs England Intl. T20 Women (11 July 2021)
India won this T20 match by 8 runs
b) Win Probability using the Logistic Regression Model – India vs England Intl. T20 Women (11 July 2021)
c) Win Probability with the DL model – India vs England Intl. T20 Women (11 July 2021)
d) Bowler Win Probability Contribution with the LR model – India vs England Intl. T20 Women (11 July 2021)
e) Bowler Win Contribution with the DL model – India vs England Intl. T20 Women (11 July 2021)
Go ahead and try out the latest version of GooglyPlusPlus
This should be my last post on computing T20 Win Probability. In this post I compute Win Probability using Augmented Data with the help of Conditional Tabular Generative Adversarial Networks (CTGANs).
A.Introduction
I started the computation of T20 match Win Probability in my earlier post
This was lightweight and could be easily deployed in my Shiny GooglyPlusPlus app, as opposed to Tidymodels' Random Forest, which was bulky and slow.
d) Finally I decided to try to improve the accuracy of my Deep Learning model using synthetic data. Towards this end, my explorations led me to Conditional Tabular Generative Adversarial Networks (CTGANs). CTGANs are GANs adapted to tabular data, since plain GAN models do not work well with tabular data. However, the best performance I got was
DL Keras Model + Synthetic data: accuracy = 0.77
The poorer accuracy was because CTGAN requires enormous computing power (GPUs) and RAM. The free versions of Colab and Kaggle kept crashing when I tried with even 0.1% of my 1.2 million row dataset. Finally, I tried with just 0.05% and was able to generate synthetic data. Most likely the small sample size and the smaller number of epochs are the reasons for the poor result. In any case, it was worth trying, and this approach would possibly work with sufficient computing resources.
B.Generative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) were the brainchild of Ian Goodfellow, who demonstrated them in 2014. GANs are capable of generating synthetic text, tables, images and videos using available data. In the Adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution.
GANs have 2 Deep Neural Networks, the Generator and the Discriminator, which compete against each other.
The Generator (Counterfeiter) takes random noise as input and generates fake images, tables, text. The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.
The Discriminator (Police) tries to distinguish between the real and fake images, text etc. The discriminator learns to distinguish the generator's fake data from real data, and penalises the generator for producing implausible results.
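A minimal sketch of the two competing networks in Keras (layer sizes and dimensions are illustrative assumptions, not the CTGAN architecture):

import tensorflow as tf
from tensorflow.keras import layers

# The generator maps random noise to a fake tabular row; the discriminator
# outputs P(row is real). Sizes are illustrative.
latent_dim, data_dim = 16, 9     # e.g. 9 features per T20 delivery

generator = tf.keras.Sequential([
    layers.Dense(32, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(data_dim)
])
discriminator = tf.keras.Sequential([
    layers.Dense(32, activation='relu', input_shape=(data_dim,)),
    layers.Dense(1, activation='sigmoid')
])

# In each training step the discriminator is trained to score real rows as 1
# and generated rows as 0, while the generator is trained to make the
# discriminator score its fake rows as 1.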
A pictorial representation of the GAN model is shown below
Theoretically, the best performance of GANs is supposed to happen when the network reaches the 'Nash equilibrium', i.e. when the Generator produces fake images that are nearly indistinguishable from real ones and the Discriminator's output is ~0.5, i.e. the discriminator is unable to distinguish between real and fake images.
Note: Though I have mentioned T20 data in the above GAN model, the T20 tabular data is actually used in CTGAN which is slightly different from the above. See Reference 2) below.
C. Conditional Tabular Generative Adversarial Networks (CTGANs)
“Modeling the probability distribution of rows in tabular data and generating realistic synthetic data is a non-trivial task. Tabular data usually contains a mix of discrete and continuous columns. Continuous columns may have multiple modes whereas discrete columns are sometimes imbalanced making the modeling difficult.” CTGANs handle these challenges.
I came upon CTGAN after spending some time exploring GANs via blogs, videos etc. For building the model I use real T20 match data. However, CTGAN requires immense raw computing power and a lot of RAM. My initial attempts on Colab and on my Mac (12 core, 32GB RAM) took forever before eventually crashing, so I switched to Kaggle and used GPUs. Still, I was only able to use a minuscule part of my T20 dataset. My match data has 1.2 million rows, and anything > 0.05% resulted in Kaggle crashing. Since I was able to use only a fraction, I executed the CTGAN model over several iterations, each iteration with a random 0.05% sample of the dataset. At the end of each iteration I also generated a synthetic dataset. Over 12 iterations, I generated close to 360K rows of 'synthetic' T20 match data.
I then augmented the 1.2 million rows of 'real' T20 match data with the generated 'synthetic' T20 match data to run my Deep Learning model.
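A sketch of this iterative sampling approach, assuming the interface of the ctgan Python package (the column names are from my T20 dataset; the epochs and sample sizes are illustrative):

from ctgan import CTGAN    # interface assumed from the ctgan/SDV package
import pandas as pd

df = pd.read_csv('t20.csv')
synthetic_parts = []
# 12 iterations, each fitting CTGAN on a random ~0.05% sample as described
for _ in range(12):
    sample = df.sample(frac=0.0005)
    ctgan = CTGAN(epochs=10, verbose=True)    # few epochs to avoid crashes
    ctgan.fit(sample, discrete_columns=['batsmanIdx', 'bowlerIdx', 'isWinner'])
    synthetic_parts.append(ctgan.sample(30000))   # ~30K rows per iteration
synthetic = pd.concat(synthetic_parts)            # ~360K synthetic rows in all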
Here the quality of the synthetic data set is evaluated.
a) Statistical evaluation
Read the real T20 match data
Read the generated T20 synthetic match data
import pandas as pd
# Read the T20 match and synthetic match data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')   # 1.2 million rows
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv') # 300K rows
# Randomly sample 1000 rows, and generate stats
df1=df.sample(n=1000)
real=df1.describe()
realData_stats=real.transpose()
print(realData_stats)
synthetic1=synthetic.sample(n=1000)
synthetic_stats=synthetic1.describe()
syntheticData_stats=synthetic_stats.transpose()
syntheticData_stats
import pandas as pd
# CTGAN prints out a new line for each epoch
epochs_output = str(output).split('\n')
# CTGAN separates the values with commas
raw_values = [line.split(',') for line in epochs_output]
loss_values = pd.DataFrame(raw_values)[:-1] # convert to df and delete last row (empty)
# Rename columns
loss_values.columns = ['Epoch', 'Generator Loss', 'Discriminator Loss']
# Extract the numbers from each column
loss_values['Epoch'] = loss_values['Epoch'].str.extract(r'(\d+)').astype(int)
loss_values['Generator Loss'] = loss_values['Generator Loss'].str.extract(r'([-+]?\d*\.\d+|\d+)').astype(float)
loss_values['Discriminator Loss'] = loss_values['Discriminator Loss'].str.extract(r'([-+]?\d*\.\d+|\d+)').astype(float)
# the result is a row for each epoch that contains the generator and discriminator loss
loss_values.head()
import plotly.graph_objects as go
# Plot loss function
fig = go.Figure(data=[go.Scatter(x=loss_values['Epoch'], y=loss_values['Generator Loss'], name='Generator Loss'),
go.Scatter(x=loss_values['Epoch'], y=loss_values['Discriminator Loss'], name='Discriminator Loss')])
# Update the layout for best viewing
fig.update_layout(template='plotly_white',
legend_orientation="h",
legend=dict(x=0, y=1.1))
title = 'CTGAN loss function for T20 dataset - '
fig.update_layout(title=title, xaxis_title='Epoch', yaxis_title='Loss')
fig.show()
G. Qualitative evaluation of Synthetic data
a) Quality of continuous columns in synthetic data
KSComplement – This metric computes the similarity of a real column vs. a synthetic column in terms of the column shapes. The KSComplement uses the Kolmogorov-Smirnov statistic. Closer to 1.0 is good and 0 is worst.
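A sketch of computing this metric, assuming the sdmetrics library's API (runRate is one of the dataset's numeric columns):

from sdmetrics.single_column import KSComplement   # API assumed

score = KSComplement.compute(
    real_data=df['runRate'],              # a numeric column of the T20 data
    synthetic_data=synthetic['runRate']
)
print(score)    # closer to 1.0 is better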
The performance is decent but not excellent. I was unable to execute more epochs as that required more memory than was allowed.
c) Correlation similarity
This metric measures the correlation between a pair of numerical columns and computes the similarity between the real and synthetic data, i.e. it compares the trends of the 2D distributions. 1.0 is best and 0.0 is worst.
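Again a sketch, assuming the sdmetrics API:

from sdmetrics.column_pairs import CorrelationSimilarity   # API assumed

score = CorrelationSimilarity.compute(
    real_data=df[['runs', 'runRate']],
    synthetic_data=synthetic[['runs', 'runRate']],
    coefficient='Pearson'
)
print(score)    # 1.0 is best, 0.0 is worst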
In this final part I augment my T20 match data set with the generated synthetic T20 data set.
import pandas as pd
import numpy as np
from numpy import savetxt
import tensorflow as tf
from tensorflow import keras
from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model
import matplotlib.pyplot as plt
# Read real and synthetic data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv')
# Augment the data. Concatenate real & synthetic data
df1=pd.concat([df,synthetic])
# Create training and test samples
print("Shape of dataframe=",df1.shape)
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
print(train_dataset1.shape)
a=train_dataset1.describe()
stats=a.transpose()
print(a)
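A hedged sketch of compiling and fitting on the augmented data follows; `model` is assumed to be the embedding-based Keras model of the earlier post, taking the 9 feature columns as input.

# Compile and fit on the augmented (real + synthetic) training data
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
history = model.fit(train_dataset1, train_labels,
                    epochs=40, batch_size=1024,
                    validation_data=(test_dataset1, test_labels))
print(history.history['val_accuracy'][-1])   # ~0.77 with the augmented data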
As can be seen, the accuracy with the augmented dataset is around 0.77, while with just the real data I was getting 0.867. This degradation is probably due to the following reasons
Only a fraction of the dataset was used for training, so it was not representative of the data distribution, preventing CTGAN from correctly synthesising data
The number of epochs had to be kept low to prevent Kaggle/Colab from crashing
I. Conclusion
This post shows how we can generate synthetic T20 match data to augment real T20 match data. Assuming we have sufficient processing power, we should be able to generate enough synthetic data to augment our data set. This should improve the accuracy of the Win Probability Deep Learning model.
In my last post ‘GooglyPlusPlus now with Win Probability Analysis for all T20 matches‘ I had discussed the performance of my ML models, created with and without player embeddings, in computing the Win Probability of T20 matches. With batsman & bowler embeddings I got much better performance than without the embeddings
glmnet – Accuracy – 0.73
Random Forest (RF) – Accuracy – 0.92
While the Random Forest gave excellent accuracy, it was bulky and also took an unusually long time to predict the Win Probability of a single T20 match. The above 2 ML models were built using R's Tidymodels. glmnet was fast, but I wanted to see if I could create an ML model that was better, lighter and faster. I had initially tried to use Tensorflow and Keras in Python but then abandoned it, since I did not know how to port the Deep Learning model to R and use it in my app GooglyPlusPlus.
But later, since I was stuck with a bulky Random Forest model, I decided to again explore options for saving the Keras Deep Learning model and loading it in R. I found out that by saving the model as .h5, we can load it in R and use it for predictions. Hence, I rebuilt the Deep Learning model using Keras and Python with player embeddings, and I got excellent performance. The DL model was light and had an accuracy of 0.8639 with an ROC_AUC of 0.964, which was great!
GooglyPlusPlus uses data from Cricsheet and is based on my R package yorkr
You can try out this latest version of GooglyPlusPlus at gpp2023-1
Here are the steps
A. Build a Keras Deep Learning model
a. Import necessary packages
import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from pathlib import Path
import matplotlib.pyplot as plt
b. Upload the data of all 9 T20 leagues (BBL, CPL, IPL, Intl. T20 (men), Intl. T20 (women), NTB, PSL, SSM, WBB)
# Read all T20 leagues
df1=pd.read_csv('t20.csv')
print("Shape of dataframe=",df1.shape)
# Create training and test data set
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
# Set the target data
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
train_dataset1
a=train_dataset1.describe()
stats=a.transpose()
a
c. Create a Deep Learning ML model using batsman & bowler embeddings
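A minimal sketch of such a model (the vocabulary sizes, embedding dimensions and layer widths below are illustrative assumptions, not the exact model):

import tensorflow as tf
from tensorflow.keras import layers, Model

num_batsmen, num_bowlers = 5000, 3000    # assumed vocabulary sizes

batsman_in = layers.Input(shape=(1,), name='batsmanIdx')
bowler_in = layers.Input(shape=(1,), name='bowlerIdx')
numeric_in = layers.Input(shape=(7,), name='numeric')   # ballNum..perfIndex

# Learn a dense vector per batsman and per bowler
b_emb = layers.Flatten()(layers.Embedding(num_batsmen, 16)(batsman_in))
w_emb = layers.Flatten()(layers.Embedding(num_bowlers, 16)(bowler_in))
x = layers.Concatenate()([b_emb, w_emb, numeric_in])
x = layers.Dense(64, activation='relu')(x)
x = layers.Dropout(0.2)(x)               # dropout as regularisation
x = layers.Dense(32, activation='relu')(x)
out = layers.Dense(1, activation='sigmoid')(x)   # ball-by-ball win probability

model = Model([batsman_in, bowler_in, numeric_in], out)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])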
It was a huge success for me to be able to create the Deep Learning model in Python and use it in my Shiny app GooglyPlusPlus. The Deep Learning Keras model is lightweight and extremely fast.
The Deep Learning model has now been integrated into GooglyPlusPlus. Now you can check the Win Probability using both a) glmnet (Logistic Regression with lasso regularisation) b) Keras Deep Learning model with dropouts as regularisation
In addition I have created 2 features based on Win Probability (WP)
i) Win Probability (Side-by-side) – Plot (interactive): With this functionality the 1st and 2nd innings are shown side-by-side. When the 1st innings is played by team 1, the Win Probability of team 2 = 100 – WP(team 1). Similarly, when the 2nd innings is being played by team 2, the Win Probability of team 1 = 100 – WP(team 2).
ii) Win Probability (Overlapping) – Plot (static): With this functionality the Win Probabilities of both team 1 (1st innings) & team 2 (2nd innings) are displayed overlapping, so that we can see how the probabilities vary ball-by-ball.
Note: Since the same UI is used for all match functions I had to re-use the Plot(interactive) and Plot(static) radio buttons for Win Probability (Side-by-side) and Win Probability(Overlapping) respectively
Here are screenshots using both ML models with both functionality for some random matches
B) ICC T20 Men World Cup – Netherlands vs South Africa – 2022-11-06
i) Match Worm wicket chart
ii) Win Probability with LR (Side-by-Side- Plot(interactive))
iii) Win Probability LR (Overlapping- Plot(static))
iv) Win Probability Deep Learning (Side-by-side – Plot (interactive))
In the 213th ball of the innings South Africa was slightly ahead of Netherlands. After that they crashed and burned!
v) Win Probability Deep Learning (Overlapping – Plot (static))
It can be seen that in the 94th ball of both innings South Africa was ahead of Netherlands before the eventual slump.
C) Intl. T20 (Women) India vs New Zealand – 2020-02-27
Here is an interesting match between the India and New Zealand T20 Women's teams. NZ successfully chased India's total in a match of wildly swinging fortunes. See the charts below.
i) Match Worm Wicket chart
ii) Win Probability with LR (Side-by-side – Plot (interactive))
iii) Win Probability with LR (Overlapping – Plot (static))
iv) Win Probability with DL model (Side-by-side – Plot (interactive))
v) Win Probability with DL model (Overlapping – Plot (static))
The above functionality in plotting the Win Probability using LR or DL with both options (Side-by-side or Overlapping) is available for all 9 T20 leagues currently supported by GooglyPlusPlus.
There is a school of thought which considers that total runs scored and strike rate for a batsman, or total wickets taken and economy rate for a bowler, do not tell the whole story. This is true to a fair extent. The runs scored or the wickets taken could have been against weaker teams, and hence the runs, strike rate or the wickets and economy rate alone do not capture all the performance details of the batsman or bowler. A technique to determine the performance of batsmen against different bowlers, and to identify a batsman's possible performance even against bowlers he/she has not yet faced, is collaborative filtering. Collaborative filtering with embeddings can also be used to group players with similar characteristics. Similarly, we could also identify the performance of bowlers versus different batsmen. Hence we need to look at average runs, SR and total wickets, ER through the lens of batsmen and bowlers against similar opposition. This is where collaborative filtering is useful.
The table below shows the performance of all batsmen against all bowlers. Each row is a batsman and each column a bowler, with the value in the cell being the total runs scored by the batsman against the bowler in all matches. Note the values are 0 for batsmen who have not yet faced specific bowlers. The table is fairly sparse.
Table A
Similarly, we can compute the performance of all bowlers against all batsmen as in the table below. Here the row is the bowler, the column is the batsman and the value in the cell is the number of times the bowler got the batsman's wicket. As before, the data is sparsely populated.
This problem of computing a batsman's performance against bowlers, or vice versa, is identical to the user vs movie rating problem used in collaborative filtering. For example, we could consider the analogous user vs movie ratings matrix.
The above problem can be computed using collaborative filtering with embeddings. We could assign sequential numbers to the batsmen from 1 to M, and to the bowlers from 1 to N. The total runs scored need only be represented for the rows where values exist. One way to solve this problem in Machine Learning is to use One Hot Encoding (OHE), where we assign values for each row and each column and map the values of the table with the values of the cell for each combination. But this would take an enormous amount of computation time and memory. The solution is to use vector embeddings. Here embeddings can be used for capturing the sparse tensors between the batsmen, bowlers and runs scored, or vice versa between bowlers, batsmen and the wickets taken. We only need to consider the cells for which values exist. An embedding is a relatively low-dimensional space into which you can translate high-dimensional vectors. An embedding captures some of the semantics of the input by placing semantically similar inputs close together in the embedding space.
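A tiny illustration of the difference (M is an assumed size):

import tensorflow as tf

# One-hot encoding a batsman among M batsmen needs an M-wide sparse vector;
# an Embedding layer instead maps the integer index to a dense, learned
# 16-dimensional vector.
M = 500
embedding = tf.keras.layers.Embedding(input_dim=M, output_dim=16)
batsman_vector = embedding(tf.constant([42]))
print(batsman_vector.shape)    # (1, 16) instead of a sparse (1, 500) one-hot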
a) To compute bowler performances and identify similarities between bowlers the following embedding in the Deep Learning Network was used
b) To compute batsmen similarities, a similar Deep Learning network for bowler vs batsmen is used
I had earlier created another post, Player Performance Estimation using AI Collaborative Filtering, for batsman and bowler recommendation using the R package Recommender Lab. However, I was not too happy with the results I got with this R package. When I searched the net for material on using embeddings for collaborative filtering, most of the material on the web on MovieLens or word2vec was repetitive, with nothing new. Finally, this short video lecture from Google Developers on Embeddings provided the most clarity.
I have created 4 Colab notebooks to identify player similarities (recommendations)
a) Batsman similarities IPL
b) Batsman similarities T20
c) Bowler similarities IPL
d) Bowler similarities T20
For creating the model I have used all the data for T20 and IPL so that I get the best results. The data is from Cricsheet. I have also used Google's Embedding Projector to display batsman and bowler embeddings and to group similar players.
All the Colab notebooks and the data associated with the code are available in Github. Feel free to download and execute them. See if you get better performance. I tried a wide variety of hyperparameters – learning rate, width and depth of nodes per layer, number of layers, gradient methods etc.
You can download all the code & data from Github at embeddings
b) Create integer dictionaries for batsmen & bowlers
bowlers = df3["bowler1"].unique().tolist()
bowlers
# Create dictionary of bowler to index
bowlers2index = {x: i for i, x in enumerate(bowlers)}
bowlers2index
#Create dictionary of index to bowler
index2bowlers = {i: x for i, x in enumerate(bowlers)}
index2bowlers
batsmen = df3["batsman1"].unique().tolist()
batsmen
# Create dictionary of batsman to index
batsmen2index = {x: i for i, x in enumerate(batsmen)}
batsmen2index
# Create dictionary of index to batsman
index2batsmen = {i: x for i, x in enumerate(batsmen)}
index2batsmen
#Map bowler, batsman to respective indices
df3["bowler"] = df3["bowler1"].map(bowlers2index)
df3["batsman"] = df3["batsman1"].map(batsmen2index)
df3
num_bowlers =len(bowlers2index)
num_batsmen = len(batsmen2index)
df3["wicketTaken"] = df3["wicketTaken"].values.astype(np.float32)
df3
# min and max wickets taken will be used to normalize the values later
min_wicketTaken = min(df3["wicketTaken"])
max_wicketTaken = max(df3["wicketTaken"])
print(
"Number of bowlers: {}, Number of batsmen: {}, Min wicketsTaken: {}, Max wicketsTaken: {}".format(
num_bowlers, num_batsmen, min_wicketTaken, max_wicketTaken
)
)
c) Concatenate additional features
df3
df6
df31=pd.concat([df3,df6],axis=1)
df31
d) Create a Tensorflow/Keras deep learning model. Minimise the Mean Squared Error using Stochastic Gradient Descent. I used 'dropouts' to regularise the model to keep the validation loss within limits.
tf.random.set_seed(4)
vector_size=len(batsmen2index)
df4=df31[['bowler','batsman','wicketTaken','balls','runsConceded','ER']]
df4
train_dataset = df4.sample(frac=0.9,random_state=0)
test_dataset = df4.drop(train_dataset.index)
train_dataset1 = train_dataset[['bowler','batsman','balls','runsConceded','ER']]
test_dataset1 = test_dataset[['bowler','batsman','balls','runsConceded','ER']]
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
#print(train_stats)
train_labels = train_dataset.pop('wicketTaken')
test_labels = test_dataset.pop('wicketTaken')
# Create a Deep Learning model with keras
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vector_size,16,input_length=5),
tf.keras.layers.Flatten(),
keras.layers.Dropout(.2),
keras.layers.Dense(16),
keras.layers.Dense(8,activation=tf.nn.relu),
keras.layers.Dense(4,activation=tf.nn.relu),
keras.layers.Dense(1)
])
# Print the model summary
#model.summary()
# Use the Adam optimizer with a learning rate of 0.01
#optimizer=keras.optimizers.Adam(learning_rate=.0009, beta_1=0.5, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
#optimizer=keras.optimizers.RMSprop(learning_rate=0.01, rho=0.2, momentum=0.2, epsilon=1e-07)
#optimizer=keras.optimizers.SGD(learning_rate=.009,momentum=0.1) - Works without dropout
optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1)
model.compile(loss='mean_squared_error',
optimizer=optimizer,
)
# Setup the training parameters
#model.compile(loss='binary_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
# Create a model
history=model.fit(
train_dataset1, train_labels,batch_size=32,
epochs=40, validation_data = (test_dataset1,test_labels), verbose=1)
e) Plot losses
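A minimal sketch, using the history object returned by model.fit() above:

import matplotlib.pyplot as plt

# Plot the training and validation loss captured by the model.fit() call above
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('Epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()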
f) Predict wickets that will be taken by bowlers against random batsmen
df5= df4[['bowler','batsman','balls','runsConceded','ER']]
test1 = df5.sample(n=10)
test1.shape
for i in range(test1.shape[0]):
    print('Bowler :', index2bowlers.get(test1.iloc[i,0]),
          ", Batsman : ", index2batsmen.get(test1.iloc[i,1]),
          '- Times wicket Prediction:', model.predict(test1.iloc[[i]]))
1/1 [==============================] - 0s 90ms/step
Bowler : Harbhajan Singh , Batsman : AM Nayar - Times wicket Prediction: [[1.0114906]]
1/1 [==============================] - 0s 18ms/step
Bowler : T Natarajan , Batsman : Arshdeep Singh - Times wicket Prediction: [[0.98656166]]
1/1 [==============================] - 0s 19ms/step
Bowler : KK Ahmed , Batsman : A Mishra - Times wicket Prediction: [[1.0504484]]
1/1 [==============================] - 0s 24ms/step
Bowler : M Muralitharan , Batsman : F du Plessis - Times wicket Prediction: [[1.0941994]]
1/1 [==============================] - 0s 25ms/step
Bowler : SK Warne , Batsman : DR Smith - Times wicket Prediction: [[1.0679393]]
1/1 [==============================] - 0s 28ms/step
Bowler : Mohammad Nabi , Batsman : Ishan Kishan - Times wicket Prediction: [[1.403399]]
1/1 [==============================] - 0s 32ms/step
Bowler : R Bhatia , Batsman : DJ Thornely - Times wicket Prediction: [[0.89399755]]
1/1 [==============================] - 0s 26ms/step
Bowler : SP Narine , Batsman : MC Henriques - Times wicket Prediction: [[1.1997008]]
1/1 [==============================] - 0s 19ms/step
Bowler : AS Rajpoot , Batsman : K Gowtham - Times wicket Prediction: [[0.9911405]]
1/1 [==============================] - 0s 21ms/step
Bowler : K Rabada , Batsman : P Simran Singh - Times wicket Prediction: [[1.0064855]]
g) The embeddings can be visualised using Google's Embedding Projector, which identifies other batsmen who have similar characteristics. Here Cosine Similarity is used for grouping similar batsmen of the IPL.
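A sketch of how such a nearest-neighbour lookup could be done directly on the learned weights (the embedding layer index and the exact name spelling in the dataset are assumptions):

import numpy as np

# Extract the learned embedding weights and rank neighbours by cosine similarity
emb = model.layers[0].get_weights()[0]

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

idx = batsmen2index['AB de Villiers']    # name spelling assumed
sims = np.array([cosine_similarity(emb[idx], emb[j]) for j in range(len(emb))])
nearest = sims.argsort()[::-1][1:6]      # top 5 neighbours, excluding self
print([index2batsmen[j] for j in nearest])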
The closest neighbor for AB De Villiers in IPL is SK Raina, then Rohit Sharma as seen in the visualisation below
The Tensorboard Embeddings Projector is also interesting. There are multiple ways the data can be visualised, namely UMAP, t-SNE and PCA (included). You could play with it.
As mentioned above the Colab notebooks and data are available at Github embeddings
The ability to identify batsmen & bowlers who would perform similarly against specific bowling attacks coupled with the average runs & strike rate should give a good measure of a player’s performance.
I have been very fascinated by how Convolutional Neural Networks are able to do image classification and image recognition so efficiently; CNNs have been very successful in both these tasks. A good paper that explores the workings of a CNN is Visualizing and Understanding Convolutional Networks by Matthew D Zeiler and Rob Fergus.
In their paper they show how, by passing the feature map through a deconvnet which does the reverse process of the convnet, they can reconstruct what input pattern originally caused a given activation in the feature map.
In the paper they say “A deconvnet can be thought of as a convnet model that uses the same components (filtering, pooling) but in reverse, so instead of mapping pixels to features, it does the opposite. An input image is presented to the CNN and features activation computed throughout the layers. To examine a given convnet activation, we set all other activations in the layer to zero and pass the feature maps as input to the attached deconvnet layer. Then we successively (i) unpool, (ii) rectify and (iii) filter to reconstruct the activity in the layer beneath that gave rise to the chosen activation. This is then repeated until input pixel space is reached.”
I started to scout the internet to see how I could implement this reverse process of convolution to understand what really happens under the hood of a CNN. There are a lot of good articles and blogs, but I found that this post, Applied Deep Learning – Part 4: Convolutional Neural Networks, takes the visualization of the CNN one step further.
This post takes VGG16 as the pre-trained network and then uses this network to display the intermediate visualizations. While this post was very informative and also the visualizations of the various images were very clear, I wanted to simplify the problem for my own understanding.
Hence I decided to take the MNIST digit classification as my base problem. I created a simple 3 layer CNN which gives close to 99.1% accuracy and decided to see if I could do the visualization.
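A minimal sketch of a 3-layer MNIST convnet of this kind (the exact filter counts and widths are assumptions; the conv2d_4/conv2d_5/conv2d_6 layer names used later depend on how many layers Keras has already created in the session):

import tensorflow as tf
from tensorflow.keras import layers

# Three Conv2D layers followed by a small fully connected classifier
model = tf.keras.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')   # one output per digit class
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])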
As mentioned in the above post, there are 3 major visualisations
Feature activations at the layer
Visualisation of the filters
Visualisation of the class outputs
Feature Activation – This visualizes the feature activations at the 3 different layers for a given input image. It can be seen that the first layer activates based on the edges of the image, while deeper layers activate in a more abstract way.
Visualization of the filters: This visualization shows what patterns the filters respond maximally to. This is implemented in Keras here
To do this, the following is repeated in a loop (a minimal sketch follows the list)
Choose a loss function that maximizes the value of a convnet filter activation
Do gradient ascent (maximization) in input space that increases the filter activation
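A minimal gradient-ascent sketch of these two steps using tf.GradientTape (the layer name, filter index and step size are assumptions; the post itself uses the keras-vis library shown later):

import tensorflow as tf

layer = model.get_layer('conv2d_5')          # layer name assumed
feature_extractor = tf.keras.Model(model.inputs, layer.output)

img = tf.Variable(tf.random.uniform((1, 28, 28, 1)))
filter_index = 0
for _ in range(100):
    with tf.GradientTape() as tape:
        activation = feature_extractor(img)
        # Loss = mean activation of the chosen filter
        loss = tf.reduce_mean(activation[:, :, :, filter_index])
    grads = tape.gradient(loss, img)
    grads = tf.math.l2_normalize(grads)
    img.assign_add(0.1 * grads)              # gradient ascent step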
Visualizing Class Outputs of the MNIST Convnet: This process is similar to determining the filter activation. Here the convnet is made to generate an image that represents the category maximally.
mnist=tf.keras.datasets.mnist
# Set training and test data and labels
(training_images,training_labels),(test_images,test_labels)=mnist.load_data()
#Normalize training data
X=np.array(training_images).reshape(training_images.shape[0],28,28,1)
# Normalize the images by dividing by 255.0
X=X/255.0
X.shape
# Use one hot encoding for the labels
Y=np_utils.to_categorical(training_labels,10)
Y.shape
# Split training data into training and validation data in the ratio of 80:20
X_train,X_validation,y_train,y_validation=train_test_split(X,Y,test_size=0.20,random_state=42)
# Normalize test data
X_test=np.array(test_images).reshape(test_images.shape[0],28,28,1)
X_test=X_test/255.0
# Use OHE for the test labels
Y_test=np_utils.to_categorical(test_labels,10)
X_test.shape
Output: (10000, 28, 28, 1)
Display data
Display the training data and the corresponding labels
Display the activations at the intermediate layers
This displays the intermediate activations as the image passes through the filters and generates these feature maps
layer_names = ['conv2d_4','conv2d_5','conv2d_6']
layer_outputs = [layer.output for layer in model.layers if 'conv2d' in layer.name]
activation_model = Model(inputs=model.input, outputs=layer_outputs)
intermediate_activations = activation_model.predict(img)
images_per_row = 8
max_images = 8
for layer_name, layer_activation in zip(layer_names, intermediate_activations):
    print(layer_name, layer_activation.shape)
    n_features = layer_activation.shape[-1]
    print("features=", n_features)
    n_features = min(n_features, max_images)
    print(n_features)
    size = layer_activation.shape[1]
    print("size=", size)
    n_cols = n_features // images_per_row
    display_grid = np.zeros((size * n_cols, images_per_row * size))
    for col in range(n_cols):
        for row in range(images_per_row):
            channel_image = layer_activation[0, :, :, col * images_per_row + row]
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            channel_image = np.clip(channel_image, 0, 255).astype('uint8')
            display_grid[col*size:(col+1)*size, row*size:(row+1)*size] = channel_image
    scale = 2. / size
    plt.figure(figsize=(scale * display_grid.shape[1], scale * display_grid.shape[0]))
    plt.axis('off')
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='viridis')
    plt.show()
It can be seen that at the higher layers only abstract features of the input image are captured
# To fix the ImportError: cannot import name 'imresize' in the next cell,
# run this cell, then comment it out, restart and run all
#!pip install scipy==1.1.0
Visualize the pattern that the filters respond to maximally
Choose a loss function that maximizes the value of the CNN filter in a given layer
Start from a blank input image.
Do gradient ascent in input space. Modify input values so that the filter activates more
Repeat this in a loop.
from vis.visualization import visualize_activation, get_num_filters
from vis.utils import utils
from vis.input_modifiers import Jitter

max_filters = 24
selected_indices = []
vis_images = [[], [], [], [], []]
i = 0
selected_filters = [[0,3,11,15,16,17,18,22], [8,21,23,25,31,32,35,41], [2,7,11,14,19,26,35,48]]
# Set the layers
layer_name = ['conv2d_4','conv2d_5','conv2d_6']
# Set the layer indices
layer_idx = [1,3,5]
for layer_name, layer_idx in zip(layer_name, layer_idx):
    # Visualize all filters in this layer.
    if selected_filters:
        filters = selected_filters[i]
    else:
        # Randomly select filters
        filters = sorted(np.random.permutation(get_num_filters(model.layers[layer_idx]))[:max_filters])
    selected_indices.append(filters)
    # Generate input image for each filter.
    # Loop through the selected filters in each layer and generate the activation of these filters
    for idx in filters:
        img = visualize_activation(model, layer_idx, filter_indices=idx, tv_weight=0.,
                                   input_modifiers=[Jitter(0.05)], max_iter=300)
        vis_images[i].append(img)
    # Generate stitched image palette with 4 cols so we get 2 rows.
    stitched = utils.stitch_images(vis_images[i], cols=4)
    plt.figure(figsize=(20, 30))
    plt.title(layer_name)
    plt.axis('off')
    stitched = stitched.reshape(1, 61, 127, 1)
    plt.imshow(stitched[0, :, :, 0])
    plt.show()
    i += 1
from vis.utils import utils

new_vis_images = [[], [], [], [], []]
i = 0
layer_name = ['conv2d_4','conv2d_5','conv2d_6']
layer_idx = [1,3,5]
for layer_name, layer_idx in zip(layer_name, layer_idx):
    # Generate input image for each filter.
    for j, idx in enumerate(selected_indices[i]):
        img = visualize_activation(model, layer_idx, filter_indices=idx,
                                   seed_input=vis_images[i][j],
                                   input_modifiers=[Jitter(0.05)], max_iter=300)
        #img = utils.draw_text(img, 'Filter {}'.format(idx))
        new_vis_images[i].append(img)
    stitched = utils.stitch_images(new_vis_images[i], cols=4)
    plt.figure(figsize=(20, 30))
    plt.title(layer_name)
    plt.axis('off')
    print(stitched.shape)
    stitched = stitched.reshape(1, 61, 127, 1)
    plt.imshow(stitched[0, :, :, 0])
    plt.show()
    i += 1
Visualizing Class Outputs
Here the CNN will generate the image that maximally represents the category. Each of the outputs represents one of the digits, as can be seen below
from vis.utils import utils
from keras import activations

codes = '''
zero 0
one 1
two 2
three 3
four 4
five 5
six 6
seven 7
eight 8
nine 9
'''
layer_idx = 10
initial = []
images = []
tuples = []
# Swap softmax with linear for better visualization
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
for line in codes.split('\n'):
    if not line:
        continue
    name, idx = line.rsplit(' ', 1)
    idx = int(idx)
    img = visualize_activation(model, layer_idx, filter_indices=idx,
                               tv_weight=0., max_iter=300,
                               input_modifiers=[Jitter(0.05)])
    initial.append(img)
    tuples.append((name, idx))
i = 0
for name, idx in tuples:
    img = visualize_activation(model, layer_idx, filter_indices=idx,
                               seed_input=initial[i], max_iter=300,
                               input_modifiers=[Jitter(0.05)])
    #img = utils.draw_text(img, name)  # Unable to display text on gray scale image
    i += 1
    images.append(img)
stitched = utils.stitch_images(images, cols=4)
plt.figure(figsize=(20, 20))
plt.axis('off')
stitched = stitched.reshape(1, 94, 127, 1)
plt.imshow(stitched[0, :, :, 0])
plt.show()
In the grid below the class outputs show the MNIST digits to which the outputs respond maximally. We can see the ghostly outlines of digits 0 – 9. We can clearly see the outline of 0, 1, 2, 3, 4, 5 (yes, it is there!), 6, 7, 8 and 9. If you look at this from a little distance the digits are clearly visible. Isn't that really cool!!
Conclusion:
It is really interesting to see the class outputs, which show the image to which each class output responds maximally. In the post Applied Deep Learning – Part 4: Convolutional Neural Networks the class outputs show much more complicated images and are worth a look. It is really interesting to note that the model has adjusted the filter values and the weights of the fully connected network to respond maximally to the MNIST digits.
Neural Style Transfer (NST) is a fascinating area of Deep Learning and Convolutional Neural Networks. NST is a technique in which the style from one image, known as the 'style image', is transferred to another image, the 'content image', and we get a third, generated image which has the content of the original image and the style of the other.
NST can be used to reimagine how famous painters like Van Gogh, Claude Monet or Picasso would have visualised a scenery or architecture. NST uses Convolutional Neural Networks (CNNs) to achieve this artistic style transfer from one image to another. NST was originally implemented by Gatys et al. in their paper A Neural Algorithm of Artistic Style. Convolutional Neural Networks have been very successful in image classification, image recognition et cetera. CNN networks have also generated very interesting pictures using Neural Style Transfer, which will be shown in this post. An interesting aspect of CNNs is that the first couple of layers capture basic features of the image like edges and pixel values, but as we go deeper into the CNN, the network captures higher level features of the input image.
To get started with Neural Style Transfer we will be using the VGG19 pre-trained network. The VGG19 CNN is a compact pre-trained network which can be used for performing NST. We could also have used ResNet or InceptionV3 networks for this purpose, but these are very large networks. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.
What needs to be done to transfer the style from one image to another? This brings us to the question: what is 'style'? What is it that distinguishes Van Gogh's painting or Picasso's cubist art? Convolutional Neural Networks capture basic features in the lower layers and much more complex features in the deeper layers. Style can be computed by taking the correlation of the feature maps in a layer L. This is my interpretation of how style is captured. Since style is intrinsic to the image, it implies that the style feature would exist across all the filters in a layer. Hence, to pick up this style we would need the correlation of the filters across the channels of a layer. This is computed mathematically using the Gram matrix, which calculates the correlation of the filter activations for the style image and the generated image.
To transfer the style from one image to the content image we need to do two parallel operations while doing forward propagation
– Compute the content loss between the source image and the generated image
– Compute the style loss between the style image and the generated image
– Finally we need to compute the total loss
In order to transfer the style from the 'style' image to the 'content' image, resulting in a 'generated' image, the total loss has to be minimised. Therefore backward propagation with gradient descent is done to minimise the total loss comprising the content and style loss.
Initially we make the Generated Image ‘G’ the same as the source image ‘S’
The content loss at layer $l$ is

$L_{content}(S, G, l) = \frac{1}{2}\sum_{i,j} \left(a_{ij}^{[l](S)} - a_{ij}^{[l](G)}\right)^2$

where $a_{ij}^{[l](S)}$ and $a_{ij}^{[l](G)}$ represent the activations at layer $l$ in filter $i$, at position $j$, for the source and generated images. The intuition is that the activations will be the same for similar source and generated images. We need to minimise the content loss so that the generated stylized image is as close to the original image as possible. An intermediate layer of VGG19, block5_conv2, is used
The Style layers that are used are
style_layers = [‘block1_conv1’,
‘block2_conv1’,
‘block3_conv1’,
‘block4_conv1’,
‘block5_conv1’]
To compute the Style Loss the Gram matrix needs to be computed. The Gram Matrix is computed by unrolling the filters as shown below (source: Convolutional Neural Networks by Prof Andrew Ng, Coursera). The result is a matrix of size $n_C \times n_C$, where $n_C$ is the number of channels.
The above diagram shows the filters of height $n_H$ and width $n_W$ with $n_C$ channels.
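A sketch of the Gram matrix computation, following the Tensorflow NST tutorial that this code is based on:

import tensorflow as tf

def gram_matrix(activation):
    # activation: a (1, n_H, n_W, n_C) feature map from a style layer
    result = tf.linalg.einsum('bijc,bijd->bcd', activation, activation)
    shape = tf.shape(activation)
    num_positions = tf.cast(shape[1] * shape[2], tf.float32)
    return result / num_positions            # (1, n_C, n_C) Gram matrix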
The contribution of layer $l$ to the style loss is given by

$L_{style}^{[l]}(S, G) = \frac{1}{(2 n_H n_W n_C)^2}\sum_{i,j} \left(G_{ij}^{[l](S)} - G_{ij}^{[l](G)}\right)^2$

where $G^{[l](S)}$ and $G^{[l](G)}$ are the Gram matrices of the style and generated images respectively. By minimising the distance between the Gram matrices of the style and generated image we can ensure that the generated image is a stylized version of the original image, similar to the style image.
The total loss is given by

$L_{total} = \alpha L_{content} + \beta L_{style}$
Back propagation with gradient descent works to minimise the content loss between the source and generated image, while the style loss tries to minimise the discrepancies in the style of the style image and generated image. Running through forward and backpropagation through several epochs successfully transfers the style from the style image to the source image.
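A minimal sketch of one such optimisation step (content_loss() and style_loss() are hypothetical helpers computing the losses above; the weights alpha and beta are illustrative):

import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)
alpha, beta = 1e4, 1e-2     # illustrative content/style weights

@tf.function
def train_step(generated):
    with tf.GradientTape() as tape:
        total_loss = alpha * content_loss(generated) + beta * style_loss(generated)
    grad = tape.gradient(total_loss, generated)
    optimizer.apply_gradients([(grad, generated)])
    generated.assign(tf.clip_by_value(generated, 0.0, 1.0))  # keep valid pixels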
Note: The code in this notebook is largely based on the Neural Style Transfer tutorial from Tensorflow, though I may have taken some changes from other blogs. I also made a few changes to the code in this tutorial, like removing the scaling factor, or the class definition. (Personally, I belong to the old school (C language) and am not much in love with 'self'.) All references are included below.
Note: Here is an interesting thought. Could we do a Neural Style Transfer in music? Imagine Carlos Santana playing 'Hotel California' or Brian May's style in 'Another Brick in the Wall'. While our first reaction would be that it may not sound good as we are used to the style of these songs, we may be surprised by a possible style transfer. This is definitely music to the ears!
Here are a few runs from this
A) Run 1
1. Neural Style Transfer – a) Content Image – My portrait. b) Style Image – Wassily Kandinsky's Composition, oil on canvas, 1913
2. Result of Neural Style Transfer
B) Run 2
1. a) Content Image – Portrait of my parents. b) Style Image – Vincent Van Gogh's Starry Night, oil on canvas, 1889
2. Result of Neural Style Transfer
C) Run 3
1. a) Content Image – Caesar 2 (Masai Mara, 20 Jun 2018). b) Style Image – The Great Wave off Kanagawa – Katsushika Hokusai, 1826-1833
2. Result of Neural Style Transfer
D) Run 4
1. a) Content Image – Junagarh Fort, Rajasthan, Sep 2016. b) Style Image – Le Pont Japonais by Claude Monet, oil on canvas, 1920
2. Result of Neural Style Transfer
Neural Style Transfer is a very ingenious idea which shows that we can segregate the style of a painting and transfer it to another image.
Convolutional Neural Networks (CNNs) have been very popular in the last decade or so. CNNs have been used in multiple applications like image recognition, image classification, facial recognition, neural style transfer etc. CNNs have been extremely successful in handling these kinds of problems. How do they work? What makes them so successful? What is the principle behind CNNs?
Note: this post is based on two Coursera courses I did, namely the Deep Learning specialisation by Prof Andrew Ng and the Tensorflow specialisation by Laurence Moroney.
In this post I show you how CNNs work. To understand how CNNs work, we need to understand the concept behind machine learning algorithms. If you take a simple machine learning algorithm in which you are trying to do multi-class classification using softmax, or binary classification with the sigmoid function, for a set of input features against a target variable, we need to create an objective function of the input features versus the target variable. Then we need to minimise this objective function using gradient descent, such that the cost is the lowest. This gives the set of weights for the different variables in the objective function.
The central problem in ML algorithms is to do feature selection, i.e. we need to find the set of features that actually influence the target. There are various methods for doing feature selection – best fit, forward fit, backward fit, ridge and lasso regression. All these methods try to pick out the predictors that influence the output most, by making the weights of the other features close to zero. Please look at my post – Practical Machine Learning in R and Python – Part 3, where I show you the different methods for doing feature selection.
In image classification or image recognition we need to find the important features in the image. How do we do that? Many years back, I played around with OpenCV. While working with OpenCV I came across numerous filters like the Sobel, the Laplacian, Canny, the Gaussian filter et cetera, which can be used to identify key features of the image. For example the Canny filter can be used for edge detection, the Gaussian for smoothing, the Sobel for determining the derivative, and we have other filters for detecting vertical or horizontal edges. Take a look at my post Computer Vision: Ramblings on derivatives, histograms and contours. So for handling images we need to apply these filters to pick out the key features of the image, namely the edges and other features. Rather than using the entire image's pixels against the target class, we can pick out the features from the image and use them as predictors of the target output.
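For example, convolving an image with a fixed Sobel filter picks out vertical edges. A small illustrative sketch (not from the original post):

import numpy as np
from scipy.signal import convolve2d

# A fixed Sobel filter responds strongly to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
image = np.random.rand(28, 28)          # stand-in for a real image
edges = convolve2d(image, sobel_x, mode='valid')   # (26, 26) edge feature map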
Note: in a Convolutional Neural Network, fixed filter values like those shown above are not used directly. Rather, the filter values are learned through back propagation and gradient descent as shown below.
In CNNs the filter values are considered to be weights which are then learned and updated in each forward/backward propagation cycle much like the way a fully connected Deep Learning Network learns the weights of the network.
Here is a short derivation of the most important parts of how a CNN works.
The convolution of a filter $F$ with the input $X$ can be represented as

$Z = X \ast F$

Passing the convolution through a non-linear function like ReLU gives the forward propagation

$A = ReLU(Z)$
To go through back propagation we need to compute $\frac{\partial L}{\partial Z}$ at every node of the Convolutional Neural network.
The loss with respect to the output is $\frac{\partial L}{\partial A}$; $\frac{\partial A}{\partial Z}$ and $\frac{\partial Z}{\partial F}$ are the local derivatives.
We need these local derivatives because we can learn the filter values using gradient descent

$F := F - \alpha \frac{\partial L}{\partial F}$

where $\alpha$ is the learning rate. Also $\frac{\partial L}{\partial X}$ is the loss which is back propagated to the previous layers. You can see the detailed derivation of back propagation in my post Deep Learning from first principles in Python, R and Octave – Part 3 in a L-layer, multi-unit Deep Learning network.
In the fully connected layers, the weights associated with each connection are computed in every cycle of forward and backward propagation using gradient descent. Similarly, the filter values are also computed and updated in each forward and backward propagation cycle. This is done so as to minimize the loss at the output layer.
By using the chain rule and simplifying the back propagation for the convolutional layers we get these 2 equations. The first equation is used to learn the filter values and the second is used to pass the loss to the layers before
$\frac{\partial L}{\partial F} = X * \frac{\partial L}{\partial Z}$ and $\frac{\partial L}{\partial X} = \frac{\partial L}{\partial Z} \;\hat{*}\; \mathrm{rot180}(F)$
where $\hat{*}$ denotes a full convolution with the filter rotated by 180°.
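To make the mechanics concrete, here is a minimal single-channel numpy sketch (mine, under the simplifying assumption of stride 1 and no padding) of the forward convolution and the filter-gradient step derived above; the input, filter and upstream gradient are random stand-ins:

import numpy as np

def conv2d(X, F):
    # 'Valid' 2D cross-correlation of input X with filter F
    h, w = X.shape
    fh, fw = F.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + fh, j:j + fw] * F)
    return out

# Forward propagation: convolve and pass through ReLU
X = np.random.randn(6, 6)           # input (stand-in)
F = np.random.randn(3, 3)           # the filter values are the learnable weights
Z = conv2d(X, F)
A = np.maximum(0, Z)                # ReLU

# Back propagation: given dL/dZ, dL/dF is the convolution of X with dL/dZ
dZ = np.random.randn(*Z.shape)      # stand-in for the upstream gradient
dF = conv2d(X, dZ)                  # gradient of the loss w.r.t. the filter

# Learn the filter values with gradient descent
alpha = 0.01                        # learning rate (assumed value)
F = F - alpha * dF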
An important aspect of performing convolutions is to reduce the size of the flattened image that is passed into the fully connected DL network. Successively convolving with 2D filters and doing max pooling helps to reduce the size of the features that we use for learning the images. Convolutions also enable sparsity of connections, as you can see in the diagram below. In Yann LeCun's LeNet-5 Convolutional Neural Network, successive convolutions reduce the image from 28 x 28 = 784 pixels to 120 flattened values.
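A quick Keras sketch (mine, loosely LeNet-5-shaped rather than an exact replica) that shows the representation shrinking at each convolution and max pooling before the 120-unit fully connected layer:

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(6, (5, 5), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(16, (5, 5), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(120, activation='relu')   # LeNet-5 style 120 units
])
model.summary()   # output shapes shrink: 24x24x6 -> 12x12x6 -> 8x8x16 -> 4x4x16 -> 256 -> 120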
Here is an interesting Deep Learning problem. Convolutions help in picking out important features of images and help in image classification/detection. What would be their equivalent if we wanted to identify the Carnatic ragam of a song? A Carnatic ragam is roughly similar to Western scales (major, natural minor, melodic minor, blues) with all their modes, Lydian, Aeolian, Phrygian etc., except that in the case of ragams it is more nuanced, complex and involved. Personally, I can rarely identify the ragam on which a Carnatic song is based (I am tone deaf when it comes to identifying ragams). I have come to understand that each Carnatic ragam has its own character, which is made up of several melodic phrases unique to that flavour of ragam. What operation like convolution would be needed so that we can pick out these unique phrases in a Carnatic ragam? Of course, we would need to use it in Recurrent Neural Networks with LSTMs, as a song is a time sequence of notes. Nevertheless, if there were some operation with which we could pick up the distinct, unique phrases from a song and then run them through a classifier, maybe we would be able to identify the ragam of the song.
Below I implement 3 simple CNNs using the Dogs vs Cats dataset from Kaggle. The first CNN uses regular convolutions and a fully connected network to classify the images. The second approach uses image augmentation; for some reason, I did not get a better performance with image augmentation. Thirdly, I use the pre-trained Inception v3 network.
1. Basic Convolutional Neural Network in Tensorflow & Keras
#-----------------------------------------------------------
# Retrieve a list of results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
import matplotlib.pyplot as plt

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))   # Get number of epochs

#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, label="training accuracy")
plt.plot(epochs, val_acc, label='validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()

#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.title('Training and validation loss')
plt.legend()
Including the important parts of this implementation below
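Since the cell that defines and trains this first network did not survive the formatting, here is a minimal sketch of that non-augmented pipeline, assuming train_dir and validation_dir point at the Kaggle Dogs vs Cats folders:

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Only rescale the pixels; no augmentation in this first approach
train_datagen = ImageDataGenerator(rescale=1./255)
validation_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
    train_dir, batch_size=32, class_mode='binary', target_size=(150, 150))
validation_generator = validation_datagen.flow_from_directory(
    validation_dir, batch_size=32, class_mode='binary', target_size=(150, 150))

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

history = model.fit(train_generator,
                    epochs=15,                        # assumed epoch count
                    validation_data=validation_generator,
                    verbose=2)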
Use Image Augmentation
Use image augmentation to improve performance
Use the same model parameters as before
Perform the following image augmentation
width, height shift
shear and zoom
Note: Adding rotation made the performance worse
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(128, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Perform the image augmentation on the training data only
train_datagen = ImageDataGenerator(rescale=1./255,
                                   #rotation_range=90,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2)
                                   #horizontal_flip=True,
                                   #fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)

# Flow training images in batches of 32 using the train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    batch_size=32,
                                                    class_mode='binary',
                                                    target_size=(150, 150))

# Flow validation images in batches of 32 using the validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(validation_dir,
                                                              batch_size=32,
                                                              class_mode='binary',
                                                              target_size=(150, 150))

# Use the Adam optimizer
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Found 20000 images belonging to 2 classes.
Found 5000 images belonging to 2 classes.
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))   # Get number of epochs

#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, label="training accuracy")
plt.plot(epochs, val_acc, label='validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()

#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.title('Training and validation loss')
plt.legend()
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87916544/87910968 [==============================] - 1s 0us/step
last layer output shape: (None, 7, 7, 768)
Use Layer 7 of Inception Network
Use Image Augmentation
Use Adam Optimizer
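The cell that loads the pre-trained network did not survive the formatting; here is a sketch of the usual Inception v3 setup, where local_weights_file is assumed to hold the path of the weights file downloaded above, and 'mixed7' is the layer with the (None, 7, 7, 768) output:

from tensorflow.keras.applications.inception_v3 import InceptionV3

pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
                                include_top=False,    # leave out the fully connected top
                                weights=None)
pre_trained_model.load_weights(local_weights_file)    # assumed path to the downloaded .h5 file

# Freeze the pre-trained layers so that only the new top layers are trained
for layer in pre_trained_model.layers:
    layer.trainable = False

last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape:', last_layer.output_shape)
last_output = last_layer.output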
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, Model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)

model = Model(pre_trained_model.input, x)

# Perform the image augmentation on the training data only
train_datagen = ImageDataGenerator(rescale=1./255,
                                   #rotation_range=90,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2)
                                   #horizontal_flip=True,
                                   #fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)

# Flow training images in batches of 32 using the train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
                                                    batch_size=32,
                                                    class_mode='binary',
                                                    target_size=(150, 150))

# Flow validation images in batches of 32 using the validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(validation_dir,
                                                              batch_size=32,
                                                              class_mode='binary',
                                                              target_size=(150, 150))

# Use the Adam optimizer
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
Found 20000 images belonging to 2 classes.
Found 5000 images belonging to 2 classes.
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))   # Get number of epochs

#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, label="training accuracy")
plt.plot(epochs, val_acc, label='validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()

#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, loss, label="training loss")
plt.plot(epochs, val_loss, label="validation loss")
plt.title('Training and validation loss')
plt.legend()
I intend to do some interesting stuff with Convolutional Neural Networks.
“From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Consider again that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there—on the mote of dust suspended in a sunbeam.”
Carl Sagan
Tensorflow and Keras are Deep Learning frameworks that really simplify a lot of things for the user. If you are familiar with Machine Learning and Deep Learning concepts, then Tensorflow and Keras are a real playground for realising your ideas. In this post I show how you can get started with Tensorflow in both Python and R.
Tensorflow in Python
For Tensorflow in Python, I found Google's Colab an ideal environment for running your Deep Learning code. This is a Google research project where you can execute your code on GPUs, TPUs etc.
Tensorflow in R (RStudio)
To execute tensorflow in R (RStudio) you need to install tensorflow and keras as shown below
In this post I show how to get started with Tensorflow and Keras in R.
# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install.packages("keras")
library(tensorflow)
library(keras)
This post takes 3 different Machine Learning problems and uses the Tensorflow/Keras framework to solve them.
Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($14.99) and in Kindle version ($9.99/Rs 449).
1. Multivariate regression with Tensorflow – Python
# Get the data from the UCI Machine Learning repository
dataset = keras.utils.get_file("parkinsons_updrs.data",
    "https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")
Downloading data from https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data
917504/911261 [==============================] - 0s 0us/step
# Read the CSV file
import pandas as pd

parkinsons = pd.read_csv(dataset, na_values="?", comment='\t',
                         sep=",", skipinitialspace=True)
print(parkinsons.shape)
print(parkinsons.columns)
# Check if there are any NAs in the rows
parkinsons.isna().sum()
# Create dummy variables for the categorical column 'sex'
# (this step is implicit in the original post; pd.get_dummies yields the sex_0/sex_1 columns used below)
parkinsons2 = pd.get_dummies(parkinsons, columns=['sex'])

# Create a training and test data set with 80%/20%
train_dataset = parkinsons2.sample(frac=0.8, random_state=0)
test_dataset = parkinsons2.drop(train_dataset.index)

# Select the feature columns
features = ['age', 'test_time', 'Jitter(%)', 'Jitter(Abs)', 'Jitter:RAP',
            'Jitter:PPQ5', 'Jitter:DDP', 'Shimmer', 'Shimmer(dB)', 'Shimmer:APQ3',
            'Shimmer:APQ5', 'Shimmer:APQ11', 'Shimmer:DDA', 'NHR', 'HNR',
            'RPDE', 'DFA', 'PPE', 'sex_0', 'sex_1']
train_dataset1 = train_dataset[features]
test_dataset1 = test_dataset[features]
# Generate the statistics of the columns for use in normalization of the data
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
train_stats
# Create the target variable
train_labels = train_dataset.pop('motor_UPDRS')
test_labels = test_dataset.pop('motor_UPDRS')
# Normalize the data by subtracting the mean and dividing by the standard deviation
def normalize(x):
    return (x - train_stats['mean']) / train_stats['std']

# Create normalized training and test data
normalized_train_data = normalize(train_dataset1)
normalized_test_data = normalize(test_dataset1)
# Create a Deep Learning model with Keras
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(6, activation=tf.nn.relu),
    keras.layers.Dense(1)
])

# Use the Adam optimizer with a learning rate of 0.01
optimizer = keras.optimizers.Adam(lr=0.01, beta_1=0.9, beta_2=0.999,
                                  epsilon=None, decay=0.0, amsgrad=False)

# Set the metrics required to be Mean Absolute Error and Mean Squared Error.
# For regression, the loss is mean_squared_error
model.compile(loss='mean_squared_error',
              optimizer=optimizer,
              metrics=['mean_absolute_error', 'mean_squared_error'])
# Fit the model
history = model.fit(normalized_train_data, train_labels,
                    epochs=1000,
                    validation_data=(normalized_test_data, test_labels),
                    verbose=0)
It can be seen that the mean absolute error is on average about +/- 4.0. The validation error is about the same. This can be reduced by playing around with the hyperparameters and increasing the number of iterations.
1a. Multivariate Regression in Tensorflow – R
# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install.packages("keras")
library(tensorflow)
library(keras)
# dplyr and dummies are needed for select() and dummy.data.frame() below
library(dplyr)
library(dummies)
# Download the Parkinson's data from UCI Machine Learning repository
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")
# Set the column names
names(dataset) <- c("subject","age", "sex", "test_time","motor_UPDRS","total_UPDRS","Jitter","Jitter.Abs",
"Jitter.RAP","Jitter.PPQ5","Jitter.DDP","Shimmer", "Shimmer.dB", "Shimmer.APQ3",
"Shimmer.APQ5","Shimmer.APQ11","Shimmer.DDA", "NHR","HNR", "RPDE", "DFA","PPE")
# Remove the column 'subject' as it is not relevant to analysis
dataset1 <- subset(dataset, select = -c(subject))
# Make the column 'sex' as a factor for using dummies
dataset1$sex=as.factor(dataset1$sex)
# Add dummy variables for the categorical variable 'sex'
dataset2 <- dummy.data.frame(dataset1, sep = ".")
## Split data 80% training and 20% test
sample_size <- floor(0.8 * nrow(dataset2))
## set the seed to make your partition reproducible
set.seed(12)
train_index <- sample(seq_len(nrow(dataset2)), size = sample_size)
train_dataset <- dataset2[train_index, ]
test_dataset <- dataset2[-train_index, ]
train_data <- train_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)
train_labels <- select(train_dataset,motor_UPDRS)
test_data <- test_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)
test_labels <- select(test_dataset,motor_UPDRS)
Normalize the data
# Normalize the data by subtracting the mean and dividing by the standard deviation
normalize<-function(x) {
y<-(x - mean(x)) / sd(x)
return(y)
}
normalized_train_data <-apply(train_data,2,normalize)
# Convert to matrix
train_labels <- as.matrix(train_labels)
normalized_test_data <- apply(test_data,2,normalize)
test_labels <- as.matrix(test_labels)
Create the Deep Learning Model
model <- keras_model_sequential()
model %>%
layer_dense(units = 6, activation = 'relu', input_shape = dim(normalized_train_data)[2]) %>%
layer_dense(units = 9, activation = 'relu') %>%
layer_dense(units = 6, activation = 'relu') %>%
layer_dense(units = 1)
# Set the metrics required to be Mean Absolute Error and Mean Squared Error.
# For regression, the loss is mean_squared_error
model %>% compile(
loss = 'mean_squared_error',
optimizer = optimizer_rmsprop(),
metrics = c('mean_absolute_error','mean_squared_error')
)
# Fit the model
# Use the test data for validation
history <- model %>% fit(
normalized_train_data, train_labels,
epochs = 30, batch_size = 128,
validation_data = list(normalized_test_data,test_labels)
)
Plot mean squared error, mean absolute error and loss for training data and test data
plot(history)
Fig 1
2. Binary classification in Tensorflow – Python
This is a simple binary classification problem from the UCI Machine Learning repository and deals with data on breast cancer from the Univ. of Wisconsin: the Breast Cancer Wisconsin (Diagnostic) Data Set.
import tensorflow as tf
from tensorflow import keras
import pandas as pd

# Read the data set from the UCI ML site
dataset_path = keras.utils.get_file("breast-cancer-wisconsin.data",
    "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data")
raw_dataset = pd.read_csv(dataset_path, sep=",", na_values="?", skipinitialspace=True)
dataset = raw_dataset.copy()

# Check for NAs and drop those rows
dataset.isna().sum()
dataset = dataset.dropna()
dataset.isna().sum()

# Set the column names
dataset.columns = ["id", "thickness", "cellsize", "cellshape", "adhesion",
                   "epicellsize", "barenuclei", "chromatin", "normalnucleoli",
                   "mitoses", "class"]
dataset.head()
# Create a training/test set in the ratio 80/20
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Set the training and test features
features = ['thickness', 'cellsize', 'cellshape', 'adhesion', 'epicellsize',
            'barenuclei', 'chromatin', 'normalnucleoli', 'mitoses']
train_dataset1 = train_dataset[features]
test_dataset1 = test_dataset[features]
# Generate the stats for each column to be used for normalization
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
train_stats
# Create the target variable from the 'class' column
# (this step is implicit in the original post; 2 = benign, 4 = malignant)
train_labels = train_dataset['class'].copy()
test_labels = test_dataset['class'].copy()

# Set the target variables as 0 or 1
train_labels[train_labels == 2] = 0   # benign
train_labels[train_labels == 4] = 1   # malignant
test_labels[test_labels == 2] = 0     # benign
test_labels[test_labels == 4] = 1     # malignant
# Normalize by subtracting the mean and dividing by the standard deviation
def normalize(x):
    return (x - train_stats['mean']) / train_stats['std']

# Convert the columns to numeric
train_dataset1 = train_dataset1.apply(pd.to_numeric)
test_dataset1 = test_dataset1.apply(pd.to_numeric)

# Normalize
normalized_train_data = normalize(train_dataset1)
normalized_test_data = normalize(test_dataset1)
# Create a model
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(6, activation=tf.nn.relu),
    keras.layers.Dense(1, activation=tf.nn.sigmoid)   # sigmoid output needed for binary_crossentropy
])

# Use the RMSProp optimizer
optimizer = tf.keras.optimizers.RMSprop(0.01)

# Since this is binary classification use binary_crossentropy
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['acc'])

# Fit the model
history = model.fit(normalized_train_data, train_labels,
                    epochs=1000,
                    validation_data=(normalized_test_data, test_labels),
                    verbose=0)
# Plot training and test accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.ylim([0.9, 1])
plt.show()

# Plot training and test loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.ylim([0, 0.5])
plt.show()
2a. Binary classification in Tensorflow – R
This is a simple binary classification problem from the UCI Machine Learning repository and deals with data on breast cancer from the Univ. of Wisconsin: the Breast Cancer Wisconsin (Diagnostic) Data Set.
# Read the data for Breast cancer (Wisconsin)
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data",
                    na.strings = "?")  # missing values in this data set are coded as "?"
# Rename the columns
names(dataset) <- c("id","thickness", "cellsize", "cellshape","adhesion","epicellsize",
"barenuclei","chromatin","normalnucleoli","mitoses","class")
# Remove the column 'id'; keep 'class' for the labels
dataset1 <- subset(dataset, select = -c(id))
# Drop the rows with NAs so that the features and labels stay aligned
dataset1 <- na.omit(dataset1)
# Separate the features from the class label
dataset2 <- subset(dataset1, select = -c(class))
# Convert the column to numeric
dataset2$barenuclei <- as.numeric(dataset2$barenuclei)
Normalize the data
train_data <-apply(dataset2,2,normalize)
train_labels <- as.matrix(select(dataset1, class))
# Set the target variables as 0 or 1 as it binary classification
train_labels[train_labels==2,]=0
train_labels[train_labels==4,]=1
Create the Deep Learning model
model <- keras_model_sequential()
model %>%
layer_dense(units = 6, activation = 'relu', input_shape = dim(train_data)[2]) %>%
layer_dense(units = 9, activation = 'relu') %>%
layer_dense(units = 6, activation = 'relu') %>%
layer_dense(units = 1)
# Since this is a binary classification we use binary cross entropy
model %>% compile(
loss = 'binary_crossentropy',
optimizer = optimizer_rmsprop(),
metrics = c('accuracy') # Metrics is accuracy
)
Fit the model. Use 20% of data for validation
history <- model %>% fit(
train_data, train_labels,
epochs = 30, batch_size = 128,
validation_split = 0.2
)
Plot the accuracy and loss for training and validation data
plot(history)
3. MNIST in Tensorflow – Python
This takes the famous MNIST handwritten digits dataset. It can be seen that Tensorflow and Keras make short work of this famous problem of the late 1980s.
# Download MNIST data
mnist = tf.keras.datasets.mnist
# Set training and test data and labels
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
print(training_images.shape)
print(test_images.shape)
# Normalize the images by dividing by 255.0
training_images = training_images / 255.0
test_images = test_images / 255.0

# Create a Sequential Keras model
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Fit and evaluate the model (the fitting cell did not survive the formatting; 5 epochs assumed)
model.fit(training_images, training_labels, epochs=5)
model.evaluate(test_images, test_labels)
This post includes a rework of all presentations of ‘Elements of Neural Networks and Deep Learning – Parts 1-8’, since my earlier presentations had some missing parts, omissions and occasional errors. So I have re-recorded all the presentations.
This series of presentations will do a deep-dive into Deep Learning networks, starting from the fundamentals. The equations required for performing learning in an L-layer Deep Learning network are derived in detail, starting from the basics. Further, the presentations also discuss multi-class classification, regularization techniques, and gradient descent optimization methods in deep networks. Finally, the presentations touch on how Deep Learning networks can be tuned.
1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning: a look at the history of Neural Networks and Perceptrons, why Deep Learning networks are required, concluding with a simple toy example of a Neural Network and how it computes. This part also includes a small digression on the basics of Machine Learning and how an algorithm learns from a data set.
2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2 layer Neural network. The presentation also takes a look at forward & backward propagation and how the cost is minimized using gradient descent
The implementation of the discussed 2 layer Neural Network in vectorized R, Python and Octave are available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1‘
3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part discusses a primitive neural network with an input layer, output layer and a hidden layer. The neural network uses a tanh activation in the hidden layer and a sigmoid activation in the output layer. The equations for forward and backward propagation are derived.
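For reference, the forward propagation equations for this network are the standard ones:

$z^{[1]} = W^{[1]}x + b^{[1]}, \quad a^{[1]} = \tanh(z^{[1]})$
$z^{[2]} = W^{[2]}a^{[1]} + b^{[2]}, \quad a^{[2]} = \sigma(z^{[2]})$

where $\sigma$ is the sigmoid function and $a^{[2]}$ is the predicted output.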
4. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation in which I derived the equations for a simple 3 layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for a L-Layer, multi-unit Deep Learning Network, with any activation function g(z)
5. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation of the Jacobian of the Softmax is discussed, and subsequently the derivative of the cross-entropy loss is also discussed in detail. Finally, the complete set of equations for a Neural Network with multi-class classification is derived.
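The punchline of that derivation, for softmax outputs $p_i = e^{z_i}/\sum_k e^{z_k}$ and cross-entropy loss $L = -\sum_i y_i \log p_i$, is the remarkably simple gradient

$\frac{\partial L}{\partial z_i} = p_i - y_i$

which is what makes softmax with cross-entropy so convenient for multi-class classification.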
6. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically the He and Xavier initializations. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed.
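As a quick illustration (mine, not from the presentation), He initialization draws the weights of a layer from a distribution whose variance is scaled by the size of the previous layer:

import numpy as np

def he_init(n_prev, n_curr):
    # He initialization: variance 2/n_prev keeps ReLU activations well scaled
    return np.random.randn(n_curr, n_prev) * np.sqrt(2.0 / n_prev)

W1 = he_init(784, 128)   # e.g. weights for a 784 -> 128 layer

Xavier initialization is the same idea with a scale of sqrt(1/n_prev), better suited to tanh activations.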
7. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces the exponentially weighted moving average and shows how it is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam.
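For example, the momentum method keeps an exponentially weighted moving average of the gradients and steps along that average; a minimal numpy sketch (the gradient here is a random stand-in):

import numpy as np

beta, alpha = 0.9, 0.01      # EWMA factor and learning rate (typical values)
w = np.zeros(10)             # parameters (stand-in)
v = np.zeros_like(w)         # exponentially weighted moving average of the gradients

for t in range(100):
    grad = np.random.randn(10)           # stand-in for dL/dw
    v = beta * v + (1 - beta) * grad     # update the moving average
    w -= alpha * v                       # momentum step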
8. Elements of Neural Networks and Deep Learning – Part 8
This last part touches on the method to adopt while tuning hyper-parameters in Deep Learning networks
Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($18.99) and in Kindle version ($9.99/Rs 449).
This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’.