Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket

In my recent post My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI), I recounted my journey through the domains of Data Science, Machine Learning (ML) and, more recently, Deep Learning (DL), all of which are useful for analyzing data. Of late, I have come to the realization that there are many facets to data, and that to glean insights from it, Data Science, ML and DL alone are not sufficient; one also needs a good handle on linear programming and optimization. My colleague at IBM Research concurred with this view and told me he had arrived at the same conclusion several years ago. While ML and DL are useful for making inferences and predictions of outputs from input variables, optimization computes the choice of inputs that results in the maximum or minimum of an objective. So I made a small course correction and started on a course from India’s own NPTEL, Introduction to Linear Programming by Prof G. Srinivasan of IIT Madras. The lectures are delivered with remarkable clarity by the Prof, and I was just about halfway through the course (each lecture is of 50-55 min duration) when I decided that I needed to try to formulate and solve some real-world Linear Programming problems.

As usual, I turned to cricket for some appropriate situations, and sure enough they were there in the open. For this LP formulation I use International T20 and IPL data, though International ODIs would work equally well. You can download the associated code and data for this from Github at LP-cricket-analysis

In T20 matches the captain has to decide how to rotate the bowlers with the aim of restricting the batting side. Conversely, the batsmen need to exploit the weaknesses of the bowling attack to maximize the runs scored.

Note:
a) A simple and obvious strategy would be:
– If the ith bowler’s economy rate is less than that of the jth bowler, i.e.
er_{i} < er_{j}, then have bowler ‘i’ bowl more overs, as his/her economy rate is better.

b) A better strategy would be to consider the economy rate of each bowler against each batsman. How often have we seen bowlers with a great bowling average get punished by a particular batsman, or a generally poor bowler prove very effective against a particular batsman, i.e. er_{ij} < er_{ik}, where the jth bowler is more effective than the kth bowler against the ith batsman. This now becomes a linear optimization problem, since there are many possible combinations of number of overs x economy rate across bowlers, and we have to solve it algorithmically to determine the lowest total conceded (bowling) or the highest total scored (batting).

This post uses the latter approach to optimize bowling change and batting lineup.

Let us take a hypothetical situation.
Assume there are 3 bowlers – bwlr_{1},bwlr_{2},bwlr_{3}
and there are 4 batsmen – bman_{1},bman_{2},bman_{3},bman_{4}

Let er_{ij} be the Economy Rate of the jth bowler to the ith batsman. Also, if the remaining overs for the bowlers are o_{1},o_{2},o_{3}, and the total number of overs left to be bowled is
o_{1}+o_{2}+o_{3} = N, then the questions are

a) Given the economy rate of each bowler against each batsman, how many overs should each bowler bowl so that the total runs scored by all the batsmen is minimized?

b) Alternatively, if we know the individual strike rate of each batsman against the individual bowlers, how many overs should each batsman face against each bowler so that the total runs scored is maximized?

1. LP Formulation for bowling order

Let o_{ij} be the number of overs that the jth bowler bowls to the ith batsman, and let er_{ij} be the Economy Rate of the jth bowler to the ith batsman.

Objective function: Minimize

\sum_{i=1}^{4}\sum_{j=1}^{3} er_{ij}*o_{ij}

subject to: each bowler j can bowl at most k_{j} more overs, i.e. \sum_{i} o_{ij} <= k_{j}; the total number of overs remaining to be bowled is N, i.e. \sum_{i,j} o_{ij} = N; and the overs are non-negative, o_{ij} >= 0.

2. LP Formulation for batting lineup

With sr_{ij} as the Strike Rate of the ith batsman against the jth bowler (numerically the same as er_{ij}), the objective function becomes: Maximize

\sum_{i=1}^{4}\sum_{j=1}^{3} sr_{ij}*o_{ij}

subject to the same constraints as above: \sum_{i} o_{ij} <= k_{j} for each bowler j, \sum_{i,j} o_{ij} = N and o_{ij} >= 0.
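The two formulations differ only in the sense of the optimization, so both can be captured by one helper function. Below is a minimal sketch using lpSolveAPI (the package used throughout this post); the names solveOvers, rateMat, oversLeft and totalOvers are my own illustrative choices, not from the original analysis.

library(lpSolveAPI)

# Sketch: solve the overs-allocation LP for a batsmen x bowlers rate matrix
# rateMat   : matrix of er_ij (or sr_ij) values, rows = batsmen, cols = bowlers
# oversLeft : overs remaining for each bowler (length = ncol(rateMat))
# totalOvers: total number of overs N still to be bowled
# sense     : "min" for the bowling problem, "max" for the batting problem
solveOvers <- function(rateMat, oversLeft, totalOvers, sense = "min") {
    nb <- nrow(rateMat)              # number of batsmen
    nw <- ncol(rateMat)              # number of bowlers
    lprec <- make.lp(0, nb * nw)     # one variable o_ij per batsman-bowler pair
    lp.control(lprec, sense = sense)
    set.objfn(lprec, as.vector(rateMat))  # column-major: one block per bowler
    # Each bowler j bowls at most oversLeft[j] overs in total
    for (j in 1:nw) {
        coef <- rep(0, nb * nw)
        coef[((j - 1) * nb + 1):(j * nb)] <- 1
        add.constraint(lprec, coef, "<=", oversLeft[j])
    }
    # Together the o_ij must account for all N remaining overs
    add.constraint(lprec, rep(1, nb * nw), "=", totalOvers)
    solve(lprec)
    list(runs  = get.objective(lprec),
         overs = matrix(get.variables(lprec), nrow = nb,
                        dimnames = dimnames(rateMat)))
}

The non-negativity constraints o_{ij} >= 0 come for free, since lpSolveAPI gives every variable a default lower bound of 0; per-pair minimum-overs constraints like the ones in the examples below can be added with further add.constraint calls.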

For these maximization and minimization problems I used the R package lpSolveAPI.

3. LP formulation (Example 1)

Initially I created a small test example to make sure I had the LP formulation and solution right. Here er1=4 and er2=3, and o1 & o2 are the overs bowled by bowlers 1 & 2, with o1+o2=4. Enumerating the feasible combinations:

o1  o2  Obj Fun (=4*o1+3*o2)
1   3   13
2   2   14
3   1   15

library(lpSolveAPI)
library(dplyr)
library(knitr)
lprec <- make.lp(0, 2)
a <-lp.control(lprec, sense="min")
set.objfn(lprec, c(4, 3))  # Economy rates of 4 and 3 for er1 and er2
add.constraint(lprec, c(1, 1), "=",4)  # o1 + o2 = 4
add.constraint(lprec, c(1, 0), ">",1)  # o1 >= 1 (lpSolveAPI treats ">" as ">=")
add.constraint(lprec, c(0, 1), ">",1)  # o2 >= 1
lprec
## Model name: 
##             C1    C2       
## Minimize     4     3       
## R1           1     1   =  4
## R2           1     0  >=  1
## R3           0     1  >=  1
## Kind       Std   Std       
## Type      Real  Real       
## Upper      Inf   Inf       
## Lower        0     0
b <-solve(lprec)
get.objective(lprec) # 13
## [1] 13
get.variables(lprec) # 1    3 
## [1] 1 3

Note 1: In the above example, 13 runs is the minimum that can be scored, and this requires

  • o1=1
  • o2=3

Note 2: The solution values give the number of overs that each bowler needs to bowl.

4. LP formulation (Example 2)

In this formulation there are 2 bowlers and 2 batsmen. o11, o12 are the overs bowled by bowler 1 to batsmen 1 & 2, and o21, o22 are the overs bowled by bowler 2 to batsmen 1 & 2, with er11=4, er12=2, er21=2, er22=5 and o11+o12+o21+o22=5.

The solution, computed manually for a few feasible combinations, is

o11  o12  o21  o22  Runs
1    1    1    2    18
1    2    1    1    15
2    1    1    1    17
1    1    2    1    15

lprec <- make.lp(0, 4)
a <-lp.control(lprec, sense="min")
set.objfn(lprec, c(4, 2,2,5))               # er11, er12, er21, er22
add.constraint(lprec, c(1, 1,0,0), "<=",8)  # bowler 1 can bowl at most 8 overs
add.constraint(lprec, c(0, 0,1,1), "<=",7)  # bowler 2 can bowl at most 7 overs
add.constraint(lprec, c(1, 1,1,1), "=",5)   # total overs remaining = 5
add.constraint(lprec, c(1, 0,0,0), ">",1)   # o11 >= 1
add.constraint(lprec, c(0, 1,0,0), ">",1)   # o12 >= 1
add.constraint(lprec, c(0, 0,1,0), ">",1)   # o21 >= 1
add.constraint(lprec, c(0, 0,0,1), ">",1)   # o22 >= 1
lprec
## Model name: 
##             C1    C2    C3    C4       
## Minimize     4     2     2     5       
## R1           1     1     0     0  <=  8
## R2           0     0     1     1  <=  7
## R3           1     1     1     1   =  5
## R4           1     0     0     0  >=  1
## R5           0     1     0     0  >=  1
## R6           0     0     1     0  >=  1
## R7           0     0     0     1  >=  1
## Kind       Std   Std   Std   Std       
## Type      Real  Real  Real  Real       
## Upper      Inf   Inf   Inf   Inf       
## Lower        0     0     0     0
b<-solve(lprec)
get.objective(lprec) 
## [1] 15
get.variables(lprec) 
## [1] 1 2 1 1

Note: In the above example 15 runs is the minimum that can be scored and this requires

  • o11=1
  • o12=2
  • o21=1
  • o22=1
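As a quick sanity check (my own addition), the reported objective can be recomputed as the dot product of the economy rates and the solution vector:

# Sanity check: runs = sum of (economy rate x overs) over all pairs
sum(c(4, 2, 2, 5) * get.variables(lprec))   # 4*1 + 2*2 + 2*1 + 5*1 = 15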

It is also possible to set the lower bounds on the overs to other values and solve again, as sketched below.
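For instance, the lower bound on o21 (constraint R6 in the model printout above) can be raised from 1 to 2 and the model re-solved. set.rhs is an existing lpSolveAPI call; the particular bound chosen here is only an illustration:

# Raise the lower bound on o21 (constraint R6) from 1 to 2 and re-solve
set.rhs(lprec, 2, constraints = 6)
b <- solve(lprec)
get.objective(lprec)   # new minimum under the tighter bound
get.variables(lprec)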

5. LP formulation for International T20 India vs Australia (Batting lineup)

To analyze batting and bowling lineups I needed the ball-by-ball details of the runs scored by each batsman against each of the bowlers. Fortunately, I had already created this with my R package yorkr, which processes yaml data from Cricsheet. So I used the data of all matches between Australia and India in International T20s. You can download my processed data for International T20s at Inswinger

load("Australia-India-allMatches.RData")
dim(matches)
## [1] 3541   25

The following functions compute the ‘Strike Rate’ of a batsman against a particular bowler as

SR = \frac{1}{overs}\sum RunsScored

Also, the Economy Rate of a bowler against a particular batsman is computed as

ER = \frac{1}{overs}\sum RunsConceded

Incidentally, for a given batsman-bowler pair SR = ER, since the runs scored by the batsman are exactly the runs conceded by the bowler.

# Compute the Strike Rate of the batsman against a bowler
computeSR <- function(batsman1,bowler1){
    a <- matches %>% filter(batsman==batsman1 & bowler==bowler1) 
    a1 <- a %>% summarize(totalRuns=sum(runs),count=n()) %>% mutate(SR=(totalRuns/count)*6)
    a1
}

# Compute the Economy Rate of the bowler against a batsman
computeER <- function(batsman1,bowler1){
    a <- matches %>% filter(batsman==batsman1 & bowler==bowler1) 
    a1 <- a %>% summarize(totalRuns=sum(runs),count=n()) %>% mutate(ER=(totalRuns/count)*6)
    a1
}

Here I compute the Strike Rate of Virat Kohli, Yuvraj Singh and MS Dhoni against Shane Watson, Brett Lee and MA Starc

 # Kohli
kohliWatson<- computeSR("V Kohli","SR Watson")
kohliWatson
##   totalRuns count       SR
## 1        45    37 7.297297
kohliLee <- computeSR("V Kohli","B Lee")
kohliLee
##   totalRuns count       SR
## 1        10     7 8.571429
kohliStarc <- computeSR("V Kohli","MA Starc")
kohliStarc
##   totalRuns count       SR
## 1        11     9 7.333333
# Yuvraj
yuvrajWatson<- computeSR("Yuvraj Singh","SR Watson")
yuvrajWatson
##   totalRuns count       SR
## 1        24    22 6.545455
yuvrajLee <- computeSR("Yuvraj Singh","B Lee")
yuvrajLee
##   totalRuns count       SR
## 1        12     7 10.28571
yuvrajStarc <- computeSR("Yuvraj Singh","MA Starc")
yuvrajStarc
##   totalRuns count SR
## 1        12     8  9
# MS Dhoni
dhoniWatson<- computeSR("MS Dhoni","SR Watson")
dhoniWatson
##   totalRuns count       SR
## 1        33    28 7.071429
dhoniLee <- computeSR("MS Dhoni","B Lee")
dhoniLee
##   totalRuns count  SR
## 1        26    20 7.8
dhoniStarc <- computeSR("MS Dhoni","MA Starc")
dhoniStarc
##   totalRuns count   SR
## 1        11     8 8.25
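Rather than making nine pairwise calls as above, the strike-rate matrix can also be assembled programmatically. A small sketch (the names batsmen, bowlers and srMat are mine):

batsmen <- c("V Kohli", "Yuvraj Singh", "MS Dhoni")
bowlers <- c("SR Watson", "B Lee", "MA Starc")
# 3 x 3 matrix of strike rates: rows = batsmen, columns = bowlers
srMat <- sapply(bowlers, function(bw)
    sapply(batsmen, function(bt) computeSR(bt, bw)$SR))
srMat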

When we consider the batting lineup, the problem is one of maximization. Formulating and solving

# 3 batsmen x 3 bowlers
lprec <- make.lp(0, 9)
# Maximization
a<-lp.control(lprec, sense="max")

# Set the objective function
set.objfn(lprec, c(kohliWatson$SR, kohliLee$SR,kohliStarc$SR,
                   yuvrajWatson$SR,yuvrajLee$SR,yuvrajStarc$SR,
                   dhoniWatson$SR,dhoniLee$SR,dhoniStarc$SR))

# Assume Kohli, Yuvraj and Dhoni can face at most 3, 4 and 3 overs respectively
# (the variables are grouped by batsman in the objective function above)
add.constraint(lprec, c(1, 1,1,0,0,0, 0,0,0), "<=",3)  # Kohli's overs
add.constraint(lprec, c(0,0,0,1,1,1,0,0,0), "<=",4)    # Yuvraj's overs
add.constraint(lprec, c(0,0,0,0,0,0,1,1,1), "<=",3)    # Dhoni's overs
#o11+o12+o13+o21+o22+o23+o31+o32+o33=8 (overs remaining)
add.constraint(lprec, c(1,1,1,1,1,1,1,1,1), "=",8) 


add.constraint(lprec, c(1,0,0,0,0,0,0,0,0), ">=",1) #o11 >=1
add.constraint(lprec, c(0,1,0,0,0,0,0,0,0), ">=",0) #o12 >=0
add.constraint(lprec, c(0,0,1,0,0,0,0,0,0), ">=",0) #o13 >=0
add.constraint(lprec, c(0,0,0,1,0,0,0,0,0), ">=",1) #o21 >=1
add.constraint(lprec, c(0,0,0,0,1,0,0,0,0), ">=",1) #o22 >=1
add.constraint(lprec, c(0,0,0,0,0,1,0,0,0), ">=",0) #o23 >=0
add.constraint(lprec, c(0,0,0,0,0,0,1,0,0), ">=",1) #o31 >=1
add.constraint(lprec, c(0,0,0,0,0,0,0,1,0), ">=",0) #o32 >=0
add.constraint(lprec, c(0,0,0,0,0,0,0,0,1), ">=",0) #o33 >=0

lprec
## Model name: 
##   a linear program with 9 decision variables and 13 constraints
b <-solve(lprec)
get.objective(lprec) #  
## [1] 68.91418
get.variables(lprec) # 
## [1] 1 2 0 1 3 0 1 0 0

This shows that the maximum number of runs that can be scored at the current strike rates is about 68.9 in 8 overs. The breakup, reading off the solution vector, is as follows

Batsman   SR Watson   B Lee   MA Starc
Kohli         1         2        0
Yuvraj        1         3        0
Dhoni         1         0        0

Kohli faces 3 overs, Yuvraj 4 and Dhoni 1, for a total of 8.

This is also shown below

e <- as.data.frame(rbind(c(1,2,0),c(1,3,0),c(1,0,0),c(3,5,0)))
names(e) <- c("S Watson","B Lee","MA Starc")
rownames(e) <- c("Kohli","Yuvraj","Dhoni","Overs")
e
##        S Watson B Lee MA Starc
## Kohli         1     2        0
## Yuvraj        1     3        0
## Dhoni         1     0        0
## Overs         3     5        0

Note: This assumes that the batsmen continue to perform at their current Strike Rates. However, anything can happen in a real game; nevertheless, this is a fairly reasonable estimate of performance.

Note 2:The numbers in the columns represent the number of overs that need to be bowled by a bowler to the corresponding batsman.

6. LP formulation for International T20 India vs Australia (Bowling lineup)

For this I compute how the bowling should be rotated among R Ashwin, RA Jadeja and JJ Bumrah, taking into account their performance against batsmen like Shane Watson, AJ Finch and David Warner. For the bowling performance I use the Economy Rate of the bowlers. The data is the same as above.

# RA Jadeja
jadejaWatson<- computeER("SR Watson","RA Jadeja")
jadejaWatson
##   totalRuns count       ER
## 1        60    29 12.41379
jadejaFinch <- computeER("AJ Finch","RA Jadeja")
jadejaFinch
##   totalRuns count       ER
## 1        36    33 6.545455
jadejaWarner <- computeER("DA Warner","RA Jadeja")
jadejaWarner
##   totalRuns count       ER
## 1        23    11 12.54545
# Ashwin
ashwinWatson<- computeER("SR Watson","R Ashwin")
ashwinWatson
##   totalRuns count       ER
## 1        41    26 9.461538
ashwinFinch <- computeER("AJ Finch","R Ashwin")
ashwinFinch
##   totalRuns count   ER
## 1        63    36 10.5
ashwinWarner <- computeER("DA Warner","R Ashwin")
ashwinWarner
##   totalRuns count       ER
## 1        38    28 8.142857
# JJ Bumrah
bumrahWatson<- computeER("SR Watson","JJ Bumrah")
bumrahWatson
##   totalRuns count  ER
## 1        22    20 6.6
bumrahFinch <- computeER("AJ Finch","JJ Bumrah")
bumrahFinch
##   totalRuns count       ER
## 1        25    19 7.894737
bumrahWarner <- computeER("DA Warner","JJ Bumrah")
bumrahWarner
##   totalRuns count ER
## 1         2     4  3

Formulating and solving the bowling lineup problem is shown below

lprec <- make.lp(0, 9)
a <-lp.control(lprec, sense="min")

# Set the objective function
set.objfn(lprec, c(jadejaWatson$ER, jadejaFinch$ER,jadejaWarner$ER,
                   ashwinWatson$ER,ashwinFinch$ER,ashwinWarner$ER,
                   bumrahWatson$ER,bumrahFinch$ER,bumrahWarner$ER))

add.constraint(lprec, c(1, 1,1,0,0,0, 0,0,0), "<=",4) # Jadeja has 4 overs
add.constraint(lprec, c(0,0,0,1,1,1,0,0,0), "<=",3)   # Ashwin has 3 overs left
add.constraint(lprec, c(0,0,0,0,0,0,1,1,1), "<=",4)   # Bumrah has 4 overs left
add.constraint(lprec, c(1,1,1,1,1,1,1,1,1), "=",10) # Total overs = 10
add.constraint(lprec, c(1,0,0,0,0,0,0,0,0), ">=",1)  # Jadeja vs Watson >= 1 over
add.constraint(lprec, c(0,1,0,0,0,0,0,0,0), ">=",0)  # Jadeja vs Finch  >= 0
add.constraint(lprec, c(0,0,1,0,0,0,0,0,0), ">=",1)  # Jadeja vs Warner >= 1
add.constraint(lprec, c(0,0,0,1,0,0,0,0,0), ">=",0)  # Ashwin vs Watson >= 0
add.constraint(lprec, c(0,0,0,0,1,0,0,0,0), ">=",1)  # Ashwin vs Finch  >= 1
add.constraint(lprec, c(0,0,0,0,0,1,0,0,0), ">=",0)  # Ashwin vs Warner >= 0
add.constraint(lprec, c(0,0,0,0,0,0,1,0,0), ">=",0)  # Bumrah vs Watson >= 0
add.constraint(lprec, c(0,0,0,0,0,0,0,1,0), ">=",1)  # Bumrah vs Finch  >= 1
add.constraint(lprec, c(0,0,0,0,0,0,0,0,1), ">=",0)  # Bumrah vs Warner >= 0

lprec
## Model name: 
##   a linear program with 9 decision variables and 13 constraints
b <-solve(lprec)
get.objective(lprec) #  
## [1] 73.58775
get.variables(lprec) # 
## [1] 1 2 1 0 1 1 0 1 3

The minimum number of runs that will be conceded by these 3 bowlers in 10 overs is about 73.59, assuming the bowling is rotated as follows

e <- as.data.frame(rbind(c(1,0,0),c(2,1,1),c(1,1,3),c(4,2,4)))
names(e) <- c("RA Jadeja","R Ashwin","JJ Bumrah")
rownames(e) <- c("S Watson","AJ Finch","DA Warner","Overs")
e 
##           RA Jadeja R Ashwin JJ Bumrah
## S Watson          1        0         0
## AJ Finch          2        1         1
## DA Warner         1        1         3
## Overs             4        2         4
#Total overs=10  
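One caveat (my addition): the formulations above are plain LPs, so the overs are treated as continuous quantities; the solutions happen to come out as whole numbers here, but that is not guaranteed in general. To force integral overs, the decision variables can be declared integer, which turns the LP into a small integer program that lpSolveAPI also solves:

# Declare all 9 decision variables integer so the overs come out whole
for (j in 1:9) set.type(lprec, j, "integer")
b <- solve(lprec)
get.variables(lprec)   # an integral allocation of overs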

7. LP formulation for IPL (Mumbai Indians – Kolkata Knight Riders – Bowling lineup)

As in the case of International T20s, I also have processed IPL data derived from my R package yorkr, which processes yaml data from Cricsheet. The processed data for all IPL matches can be downloaded from GooglyPlus

load("Mumbai Indians-Kolkata Knight Riders-allMatches.RData")
dim(matches)
## [1] 4237   25
# Compute the Economy Rates for each batsman-bowler pair

# Gambhir
gambhirMalinga <- computeER("G Gambhir","SL Malinga")
gambhirHarbhajan <- computeER("G Gambhir","Harbhajan Singh")
gambhirPollard <- computeER("G Gambhir","KA Pollard")

#Yusuf Pathan
yusufMalinga <- computeER("YK Pathan","SL Malinga")
yusufHarbhajan <- computeER("YK Pathan","Harbhajan Singh")
yusufPollard <- computeER("YK Pathan","KA Pollard")

#JH Kallis
kallisMalinga <- computeER("JH Kallis","SL Malinga")
kallisHarbhajan <- computeER("JH Kallis","Harbhajan Singh")
kallisPollard <- computeER("JH Kallis","KA Pollard")

#RV Uthappa
uthappaMalinga <- computeER("RV Uthappa","SL Malinga")
uthappaHarbhajan <- computeER("RV Uthappa","Harbhajan Singh")
uthappaPollard <- computeER("RV Uthappa","KA Pollard")

Formulating and solving this for the bowling lineup of Mumbai Indians against Kolkata Knight Riders

 library("lpSolveAPI")
 lprec <- make.lp(0, 12)
 a=lp.control(lprec, sense="min")
 
 set.objfn(lprec, c(gambhirMalinga$ER, yusufMalinga$ER,kallisMalinga$ER,uthappaMalinga$ER,
                    gambhirHarbhajan$ER,yusufHarbhajan$ER,kallisHarbhajan$ER,uthappaHarbhajan$ER,
                    gambhirPollard$ER,yusufPollard$ER,kallisPollard$ER,uthappaPollard$ER))
 
 add.constraint(lprec, c(1,1,1,1, 0,0,0,0, 0,0,0,0), "<=",4)  # Malinga bowls at most 4 overs
 add.constraint(lprec, c(0,0,0,0,1,1,1,1,0,0,0,0), "<=",4)    # Harbhajan bowls at most 4 overs
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,1,1,1,1), "<=",4)    # Pollard bowls at most 4 overs
 add.constraint(lprec, c(1,1,1,1,1,1,1,1,1,1,1,1), "=",10)    # Total overs = 10
 
 add.constraint(lprec, c(1,0,0,0,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,1,0,0,0,0,0,0,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,1,0,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,1,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,1,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,1,0,0,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,1,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,1,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,1,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,1,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,0,1,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,0,0,1), ">=",0)
 
 lprec
## Model name: 
##   a linear program with 12 decision variables and 16 constraints
 b=solve(lprec)
 get.objective(lprec) #  
## [1] 55.57887
 get.variables(lprec) # 
##  [1] 3 1 0 0 0 1 0 1 3 1 0 0
e <- as.data.frame(rbind(c(3,1,0,0,4),c(0, 1, 0,1,2),c(3, 1, 0,0,4)))
names(e) <- c("Gambhir","Yusuf","Kallis","Uthappa","Overs")
rownames(e) <- c("Malinga","Harbhajan","Pollard") 
e
##           Gambhir Yusuf Kallis Uthappa Overs
## Malinga         3     1      0       0     4
## Harbhajan       0     1      0       1     2
## Pollard         3     1      0       0     4
#Total overs=10  

8. LP formulation for IPL (Mumbai Indians – Kolkata Knight Riders – Batting lineup)

As I mentioned, it is possible to perform a maximization with the same formulation, since computeSR and computeER compute the same quantity.

This just flips the problem around and computes the maximum runs that can be scored given the batsmen’s Strike Rates (which are the same as the bowlers’ Economy Rates)

 library("lpSolveAPI")
 lprec <- make.lp(0, 12)
 a=lp.control(lprec, sense="max")
 
 set.objfn(lprec, c(gambhirMalinga$ER, yusufMalinga$ER,kallisMalinga$ER,uthappaMalinga$ER,
                    gambhirHarbhajan$ER,yusufHarbhajan$ER,kallisHarbhajan$ER,uthappaHarbhajan$ER,
                    gambhirPollard$ER,yusufPollard$ER,kallisPollard$ER,uthappaPollard$ER))
 
 
 add.constraint(lprec, c(1,1,1,1, 0,0,0,0, 0,0,0,0), "<=",4)
 add.constraint(lprec, c(0,0,0,0,1,1,1,1,0,0,0,0), "<=",4)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,1,1,1,1), "<=",4)
 add.constraint(lprec, c(1,1,1,1,1,1,1,1,1,1,1,1), "=",11)
 
 add.constraint(lprec, c(1,0,0,0,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,1,0,0,0,0,0,0,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,1,0,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,1,0,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,1,0,0,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,1,0,0,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,1,0,0,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,1,0,0,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,1,0,0,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,1,0,0), ">=",1)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,0,1,0), ">=",0)
 add.constraint(lprec, c(0,0,0,0,0,0,0,0,0,0,0,1), ">=",0)
 lprec
## Model name: 
##   a linear program with 12 decision variables and 16 constraints
 b=solve(lprec)
 get.objective(lprec) #  
## [1] 94.22649
 get.variables(lprec) # 
##  [1] 0 3 0 0 0 1 0 3 0 1 3 0
e <- as.data.frame(rbind(c(0,3,0,0,3),c(0, 1, 0,3,4),c(0, 1, 3,0,4)))
names(e) <- c("Gambhir","Yusuf","Kallis","Uthappa","Overs")
rownames(e) <- c("Malinga","Harbhajan","Pollard") 
e
##           Gambhir Yusuf Kallis Uthappa Overs
## Malinga         0     3      0       0     3
## Harbhajan       0     1      0       3     4
## Pollard         0     1      3       0     4
#Total overs=11  

Conclusion: It is thus possible to determine the optimum number of overs to give a specific bowler, based on his/her Economy Rate against particular batsmen. Similarly, one can determine the maximum runs that can be scored by batsmen based on their strike rates against individual bowlers. However, while this may provide some indication, cricket, like any other game, depends on a fair amount of chance.

My 2 video presentations on ‘Essential Python for Datascience’

Here, in this post, I include 2 sessions on ‘Essential Python for Datascience’. These 2 presentations cover the most important features of the Python language, with which you can hit the ground running in datascience. All the related material for these sessions can be cloned/downloaded from Github at ‘EssentialPythonForDatascience’

1. Essential Python for Datascience -1
In this video presentation I cover basic data types like tuples, lists and dictionaries, how to get the type of a variable, subsetting, and numpy arrays. Some basic operations on numpy arrays, including slicing, are also covered.

2. Essential Python for Datascience -2
In the 2nd part I cover Pandas: pandas Series, dataframes, how to subset dataframes using iloc/loc, selecting specific columns, filtering dataframes by criteria etc. Other operations include groupby, apply and agg. Lastly, I also touch upon matplotlib.

This is by no means an exhaustive coverage of the multitude of features available in Python, but it can serve as a good starting point for those venturing into datascience with Python.

Good luck with Python!

Also see
1. My 3 video presentations on “Essential R”
2. Neural Networks: The mechanics of backpropagation
3. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
4. Deblurring with OpenCV: Weiner filter reloaded
5. GooglyPlus: yorkr analyzes IPL players, teams, matches with plots and table

To see all posts see Index of posts

My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)

Then felt I like some watcher of the skies 
When a new planet swims into his ken; 
Or like stout Cortez when with eagle eyes 
He star’d at the Pacific—and all his men 
Look’d at each other with a wild surmise— 
Silent, upon a peak in Darien. 
On First Looking into Chapman’s Homer by John Keats

The above excerpt from John Keats’s poem captures the exhilaration one experiences when discovering something for the first time. It also summarizes, to some extent, my own enjoyment while pursuing Data Science, Machine Learning and the like.

I decided to write this post as occasionally youngsters approach me and ask where they should start their adventure in Data Science & Machine Learning. At other times, the ‘not-so-youngsters’ want to know what their next step should be after having done some courses. This post chronicles my travels through the domains of Data Science, Machine Learning, Deep Learning and (soon to be done) AI.

By no means am I an authority in this field, which is ever-widening and almost bottomless, yet I would like to share some of my experiences in this fascinating area. I include a short review of the courses I have done below. I also include alternative routes through courses which I did not do, but which are probably equally good. Feel free to pick and choose any course or set of courses. Alternatively, you may prefer to read books or attend brick-and-mortar classes. In any case, I hope the list below will provide you with some overall direction.

All my learning in the above domains has come from MOOCs, and I restrict myself to the top 3 MOOCs, or in my opinion ‘the original MOOCs’, namely Coursera, edX and Udacity, though I may throw in some courses from other online sites if they are only available there. I would recommend these 3 MOOC platforms over the numerous other online courses, and also over face-to-face classroom courses, for the following reasons. These MOOCs

  • Are offered by world-class colleges, and the lectures are delivered by top-class Professors with great depth of knowledge and a wealth of experience
  • The Professors, besides delivering quality content, also point out important tips, tricks and traps
  • You can revisit lectures in online courses anytime to refresh your memory
  • Lectures are usually short, between 8-15 mins (personally, my attention span is around 15-20 mins at a time!)

Here is a fair warning and something quite obvious. No amount of courses, lectures or books will help if you don’t put it to use through some language like Octave, R or Python.

The journey
My trip through Data Science and Machine Learning started with an off-chance remark, about 3 years ago, from an old friend of mine who spoke to me about having done a few courses at Coursera, and how much he liked them. He further suggested that I should try one. This was the final push which set me sailing into this vast domain.

I have included the list of the courses I have done over the past 5 years (37+ certifications completed and another 9 audited-listened only without doing the assignments). For each of the courses I have included a short review of the course, whether I think the course is mandatory, the language in which the course is based on, and finally whether I have done the course myself etc. I have also included alternative courses, which I may have not done, but which I think are equally good. Finally, I suggest some courses which I have heard of and which are very good and worth taking.

1. Machine Learning, Stanford, Prof Andrew Ng, Coursera
(Requirement: Mandatory, Language:Octave,Status:Completed)
This course provides an excellent foundation on which to build your Machine Learning citadel. The course covers the mathematical details of linear, logistic and multivariate regression. There is also good coverage of topics like Neural Networks, SVMs, Anomaly Detection, underfitting, overfitting, regularization etc. Prof Andrew Ng presents the material in a very lucid manner. It is a great course to start with. It would be a good idea to brush up on some basics of linear algebra, matrices and a little calculus, specifically computing local maxima/minima. You should be able to take this course even if you don’t know Octave, as the Prof goes over the key aspects of the language.

2. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online (Requirement: Mandatory, Language: R, Status: Completed)
The course includes linear and polynomial regression and logistic regression. It also covers cross-validation and the bootstrap, model selection and regularization (ridge and lasso). It further touches on non-linear models, generalized additive models, boosting and SVMs. Some unsupervised learning methods are also discussed. The 2 Professors take turns delivering the lectures, with a slight touch of humor.

3a. Data Science Specialization: Prof Roger Peng, Prof Brian Caffo & Prof Jeff Leek, Johns Hopkins University (Requirement: Option A, Language: R, Status: Completed)
This is a comprehensive 10-module specialization based on R. It gives a very broad overview of Data Science and Machine Learning. The modules cover R programming, Statistical Inference, Practical Machine Learning, how to build R products and R packages, and finally a very good Capstone project on NLP

3b. Applied Data Science with Python Specialization: University of Michigan (Requirement: Option B, Language: Python, Status: Not done)
In this specialization I only did Applied Machine Learning in Python (Prof Kevyn Collins-Thompson). This is a very good course that covers many Machine Learning algorithms (linear, logistic, ridge and lasso regression, kNN, SVMs etc.). Also included are confusion matrices, ROC curves etc. It is based on Python’s scikit-learn

3c. Machine Learning Specialization, University Of Washington (Requirement:Option C, Language:Python, Status : Not completed). This appears to be a very good Specialization in Python

4. Statistics with R Specialization, Duke University (Requirement: Useful and a must-know, Language: R, Status: Not Completed)
I audited (listened only) the following 2 modules from this Specialization.
a.Inferential Statistics
b.Linear Regression and Modeling
Both these courses are taught by Prof Mine Çetinkaya-Rundel, who delivers her lessons with extraordinary clarity. Her lectures are filled with many examples which she walks you through in great detail

5. Bayesian Statistics: From Concept to Data Analysis, Univ of California, Santa Cruz (Requirement: Optional, Language: R, Status: Completed)
This is an interesting course and provides an alternative point of view to the frequentist approach

6. Data Science and Engineering with Spark, University of California, Berkeley, Prof Antony Joseph, Prof Ameet Talwalkar, Prof Jon Bates
(Requirement: Mandatory for Big Data, Language: pySpark, Status: Completed)
This specialization contains 3 modules
a.Introduction to Apache Spark
b.Distributed Machine Learning with Apache Spark
c.Big Data Analysis with Apache Spark

This is an excellent course for those who want to make an entry into Distributed Machine Learning. The exercises are fairly challenging, and your code will predominantly be made up of map/reduce and lambda operations as you process data that is distributed across Spark RDDs. I really liked the part where the Prof shows how a matrix multiplication that is of the order of O(nd^2 + d^3) on a single machine (and is at the basis of much of Machine Learning) is reduced to O(nd^2) by taking outer products on distributed data.

7. Deep Learning Specialization, Prof Andrew Ng, Younes Bensouda Mourri, Kian Katanforoosh (Requirement: Mandatory, Language: Python, Tensorflow, Status: Completed)

This specialization has 5 courses, which start from the fundamentals of Neural Networks, their derivation and vectorized Python implementation. The specialization also covers regularization, optimization techniques, mini-batch gradient descent and batch normalization, Convolutional Neural Networks, Recurrent Neural Networks and LSTMs, applied to a wide variety of real-world problems

The modules are
a. Neural Networks and Deep Learning
In this course Prof Andrew Ng explains differential calculus, linear algebra and vectorized Python implementations of Deep Learning algorithms. The derivation of back-propagation is worked out, and the Prof then shows how to compute a multi-layered DL network
b.Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Deep Neural Networks can be very flexible and come with a lot of knobs (hyper-parameters) to tune. In this module, Prof Andrew Ng shows a systematic way to tune hyperparameters, and by how much one should tune them. The course also covers regularization (L1, L2, dropout), gradient descent optimization and batch normalization methods. The visualizations used to explain the momentum method, RMSprop, Adam, LR decay and batch normalization are really powerful and serve to clarify the concepts. As an added bonus, the module also includes a great introduction to Tensorflow.
c.Structuring Machine Learning Projects
A very good module with useful tips, tricks and traps that need to be considered while working on Machine Learning and Deep Learning projects
d. Convolutional Neural Networks
This domain has a lot of really cool ideas, in which images, represented as 3D volumes, are progressively compressed spatially and stretched along the depth dimension by a multi-layered convolutional network before classification, detection etc. are performed. The Prof provides a glimpse into this fascinating world of image classification, detection and neural style transfer with frameworks like Keras and Tensorflow.
e. Sequence Models
This module covers in good detail concepts like RNNs, GRUs, LSTMs, word embeddings, beam search and the attention model.

8. Neural Networks for Machine Learning, Prof Geoffrey Hinton,University of Toronto
(Requirement: Mandatory, Language;Octave, Status:Completed)
This is a broad course which starts from the basics of Perceptrons and goes all the way to Boltzmann Machines, RNNs, CNNs, LSTMs etc. The course also covers regularisation, learning rate decay, the momentum method etc.

9. Probabilistic Graphical Models, Stanford, Prof Daphne Koller (Language: Octave, Status: Partially completed)
This has 3 courses
a.Probabilistic Graphical Models 1: Representation – Done
b.Probabilistic Graphical Models 2: Inference – To do
c.Probabilistic Graphical Models 3: Learning – To do
This course discusses how a system, which can be represented as a complex interaction
of probability distributions, will behave. This is probably the toughest course I did. I did manage to get through the 1st module, and while I felt I grasped a few things, I did not wholly understand the import of it all. However, I feel this is an important domain and I will definitely revisit it in future

10. Reinforcement Learning Specialization, University of Alberta, Prof Adam White and Prof Martha White
(Requirement: Very important, Language;Python, Status: Partially Completed)
This is a set of 4 courses. I did the first 2 of the 4. Reinforcement Learning appears deceptively simple, but it is anything but simple. Definitely a very critical area to learn.

a.Fundamentals of Reinforcement Learning: This course discusses Markov models, value functions, Bellman equations and dynamic programming.
b.Sample-based Learning Methods: This course touches on Monte Carlo methods, Temporal Difference methods, Q-Learning etc.

Reinforcement Learning is a must-have in your AI arsenal.

11. Tensorflow in Practice Specialization – Prof Laurence Moroney – Deep Learning.AI
(Requirement: Important, Language;Python, Status: Completed)
This is a good course, but definitely do the Deep Learning Specialization by Prof Andrew Ng first.
There are 4 courses in this Specialization. I completed all 4; they are fairly straightforward.
a. Introduction to TensorFlow – This course introduces you to Tensorflow and image recognition with a brute-force method
b. Convolutional Neural Networks in Tensorflow – This course touches on how to build a CNN, image augmentation, transfer learning and multi-class classification
c. Natural Language Processing in Tensorflow – Word embeddings, sentiment analysis, LSTMs and RNNs are discussed.
d. Sequences, time series and prediction – This course discusses using RNNs for time series and auto-correlation

12. Natural Language Processing Specialization – Prof Younes Bensouda Mourri, Lukasz Kaiser, DeepLearning.AI
(Requirement: Very Important, Language: Python, Status: Partially Completed)
This is the latest specialization from DeepLearning.AI. I have completed the first 2 courses
a.Natural Language Processing with Classification and Vector Spaces – The first course deals with sentiment analysis with Naive Bayes, vector space models, capturing dependencies using PCA etc.
b. Natural Language Processing with Probabilistic Models – In this course, techniques for auto-correction, Markov models and the Viterbi algorithm for Parts-of-Speech tagging, auto-completion and word embeddings are discussed.

13. Mining Massive Data Sets, Prof Jure Leskovec, Prof Anand Rajaraman and Prof Jeff Ullman, Stanford Online (Status: Partially done)
I quickly audited this course a year back, when it used to be on Coursera; it now seems to have moved to Stanford Online. It is a very good course that discusses key concepts of mining Big Data of the order of a few Petabytes

14. Introduction to Artificial Intelligence, Prof Sebastian Thrun & Prof Peter Norvig, Udacity
This is a really good course. I have started on it a couple of times and somehow gave up, but will revisit and complete it in future. It is quite extensive in its coverage, touching on BFS, DFS, A-Star, PGMs, Machine Learning etc.

15. Deep Learning (with TensorFlow), Vincent Vanhoucke, Principal Scientist at Google Brain
I got started on this one and abandoned it some time back. It is on my to-do list though

16. Generative AI with LLMs

My learning journey is based on Lao Tzu’s dictum: ‘A good traveler has no fixed plans and is not intent on arriving’. But you could certainly have a goal and plan your courses accordingly.
And so my journey continues…

I hope you find this list useful.
Have a great journey ahead!!!