GooglyPlusPlus: Computing T20 player’s Win Probability Contribution

In this post, I compute each batsman’s or bowler’s Win Probability Contribution (WPC) in a T20 match. This metric captures by how much the player (batsman or bowler) changed/impacted the Win Probability of the T20 match. For this computation I use my machine learning models, I had created earlier, which predicts the ball-by-ball win probability as the T20 match progresses through the 2 innings of the match.

In the picture snippet below, you can see how the win probability changes ball-by-ball for each batsman for a T20 match between CSK vs LSG- 31 Mar 2022

In my previous posts I had created several Machine Learning models. In order to compute the player’s Win Probability contribution in this post, I have used the following ML models

The batsman’s or bowler’s win probability contribution changes ball-by=ball. The player’s contribution is calculated as the difference in win probability when the batsman faces the 1st ball in his innings and the last ball either when is out or the innings comes to an end. If the difference is +ve the the player has had a positive impact, and likewise for negative contribution. Similarly, for a bowler, it is the win probability when he/she comes into bowl till, the last delivery he/she bowls

Note: The Win Probability Contribution does not have any relation to the how much runs or at what strike rate the batsman scored the runs. Rather the model computes different win probability for each player, based on his/her embedding, the ball in the innings and six other feature vectors like runs, run rate, runsMomentum etc. These values change for every ball as seen in the table above. Also, this is not continuous. The 2 ML models determine the Win Probability for a specific player, ball and the context in the match.

This metric is similar to Win Probability Added (WPA) used in Sabermetrics for baseball. Here is the definition of WPA from Fangraphs “Win Probability Added (WPA) captures the change in Win Expectancy from one plate appearance to the next and credits or debits the player based on how much their action increased their team’s odds of winning.” This article in Fangraphs explains in detail how this computation is done.

In this post I have added 4 new function to my R package yorkr.

  • batsmanWinProbLR – batsman’s win probability contribution based on glmnet (Logistic Regression)
  • bowlerWinProbLR – bowler’s win probability contribution based on glmnet (Logistic Regression)
  • batsmanWinProbDL – batsman’s win probability contribution based on Deep Learning Model
  • bowlerWinProbDL – bowlerWinProbLR – bowler’s win probability contribution based on Deep Learning

Hence there are 4 additional features in GooglyPlusPlus based on the above 4 functions. In addition I have also updated

-winProbLR (overLap) function to include the names of batsman when they come to bat and when they get out or the innings comes to an end, based on Logistic Regression

-winProbDL(overLap) function to include the names of batsman when they come to bat and when they get out based on Deep Learning

Hence there are 6 new features in this version of GooglyPlusPlus.

Note: All these new 6 features are available for all 9 formats of T20 in GooglyPlusPlus namely

a) IPL b) BBL c) NTB d) PSL e) Intl, T20 (men) f) Intl. T20 (women) g) WBB h) CSL i) SSM

Check out the latest version of GooglyPlusPlus at gpp2023-2

Note: The data for GooglyPlusPlus comes from Cricsheet and the Shiny app is based on my R package yorkr

A) Chennai SuperKings vs Delhi Capitals – 04 Oct 2021

To understand Win Probability Contribution better let us look at Chennai Super Kings vs Delhi Capitals match on 04 Oct 2021

This was closely fought match with fortunes swinging wildly. If we take a look at the Worm wicket chart of this match

a) Worm Wicket chartCSK vs DC – 04 Oct 2021

Delhi Capitals finally win the match

b) Win Probability Logistic Regression (side-by-side) – CSK vs DC – 4 Oct 2021

Plotting how win probability changes over the course of the match using Logistic Regression Model

In this match Delhi Capitals won. The batting scorecard of Delhi Capitals

c) Batting Scorecard of Delhi Capitals – CSK vs DC – 4 Oct 2021

d) Win Probability Logistic Regression (Overlapping) – CSK vs DC – 4 Oct 2021

The Win Probability LR (overlapping) shows the probability function of both teams superimposed over one another. The plot includes when a batsman came into to play and when he got out. This is for both teams. This looks a little noisy, but there is a way to selectively display the change in Win Probability for each team. This can be done , by clicking the 3 arrows (orange or blue) from top to bottom. First double-click the team CSK or DC, then click the next 2 items (blue,red or black,grey) Sorry the legends don’t match the colors! 😦

Below we can see how the win probability changed for Delhi Capitals during their innings, as batsmen came into to play. See below

e) Batsman Win Probability contribution:DC – CSK vs DC – 4 Oct 2021

Computing the individual batsman’s Win Contribution and plotting we have. Hetmeyer has a higher Win Probability contribution than Shikhar Dhawan depsite scoring fewer runs

f) Bowler’s Win Probability contribution :CSK – CSK vs DC – 4 Oct 2021

We can also check the Win Probability of the bowlers. So for e.g the CSK bowlers and which bowlers had the most impact. Moeen Ali has the least impact in this match

B) Intl. T20 (men) Australia vs India – 25 Sep 2022

a) Worm wicket chart – Australia vs India – 25 Sep 2022

This was another close match in which India won with the penultimate ball

b) Win Probability based on Deep Learning model (side-by-side) – Australia vs India – 25 Sep 2022

c) Win Probability based on Deep Learning model (overlapping) – Australia vs India – 25 Sep 2022

The plot below shows how the Win Probability of the teams varied across the 20 overs. The 2 Win Probability distributions are superimposed over each other

d) Batsman Win Probability Contribution : IndiaAustralia vs India – 25 Sep 2022

Selectively choosing the India Win Probability plot by double-clicking legend ‘India’ on the right , followed by single click of black, grey legend we have

We see that Kohli, Suryakumar Yadav have good contribution to the Win Probability

e) Plotting the Runs vs Strike Rate:India – Australia vs India – 25 Sep 2022

f) Batsman’s Win Probability Contribution- Australia vs India – 25 Sep 2022

Finally plotting the Batsman’s Win Probability Contribution

Interestingly, Kohli has a greater Win Probability Contribution than SKY, though SKY scored more runs at a better strike rate. As mentioned above, the Win Probability is context dependent and also depends on past performances of the player (batsman, bowler)

Finally let us look at

C) India vs England Intll T20 Women (11 July 2021)

a) Worm wicket chart – India vs England Intl. T20 Women (11 July 2021)

India won this T20 match by 8 runs

b) Win Probability using the Logistic Regression Model – India vs England Intl. T20 Women (11 July 2021)

c) Win Probability with the DL model – India vs England Intl. T20 Women (11 July 2021)

d) Bowler Win Probability Contribution with the LR model India vs England Intl. T20 Women (11 July 2021)

e) Bowler Win Contribution with the DL model India vs England Intl. T20 Women (11 July 2021)

Go ahead and try out the latest version of GooglyPlusPlus

Also see my other posts

  1. Deep Learning from first principles in Python, R and Octave – Part 8
  2. A method to crowd source pothole marking on (Indian) roads
  3. Big Data 7: yorkr waltzes with Apache NiFi
  4. Practical Machine Learning with R and Python – Part 6
  5. Introducing cricpy:A python package to analyze performances of cricketers
  6. Revisiting World Bank data analysis with WDI and gVisMotionChart
  7. Literacy in India – A deepR dive
  8. Cricketr learns new tricks : Performs fine-grained analysis of players
  9. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”
  10. Adventures in LogParser, HTA and charts

To see all posts click Index of posts

T20 Win Probability using CTGANs, synthetic data

This should be my last post on computing T20 Win Probability. In this post I compute Win Probability using Augmented Data with the help of Conditional Tabular Generative Adversarial Networks (CTGANs).

A.Introduction

I started the computation of T20 match Win Probability in my earlier post

a) ‘Computing Win-Probability of T20 matches‘ where I used

  • vanilla Logistic Regression to get an accuracy of 0.67,
  • Random Forest with Tidy models gave me an accuracy of 0.737
  • Deep Learning with Keras also with 0.73.

This was done without player embeddings

b) Next I used player embeddings for batsmen and bowlers in my post Boosting Win Probability accuracy with player embeddings , and my accuracies improved significantly

  • glmnet : accuracy – 0.728 and roc_auc – 0.81
  • random forest : accuracy – 0.927 and roc_auc – 0.98
  • mlp-dnn :accuracy – 0.762 and roc_auc – 0.854

c) Third I tried using Deep Learning with Keras using player embeddings

  • DL network gave an accuracy of 0.8639

This was lightweight and could be easily deployed in my Shiny GooglyPlusPlus app as opposed to the Tidymodel’s Random Forest, which was bulky and slow.

d) Finally I decided to try and improve the accuracy of my Deep Learning Model using Synthetic data. Towards this end, my explorations led me to Conditional Tabular Generative Adversarial Networks (CTGANs). CTGAN are GAN networks that can be used with Tabular data as GAN models are not useful with tabular data. However, the best performance I got for

  • DL Keras Model + Synthetic data : accuracy =0.77

The poorer accuracy was because CTGAN requires enormous computing power (GPUs) and RAM. The free version of Colab, Kaggle kept crashing when I tried with even 0.1 % of my 1.2 million dataset size. Finally, I tried with just 0.05% and was able to generate synthetic data. Most likely, it is the small sample size and the smaller number of epochs could be the reason for the poor result. In any case, it was worth trying and this approach would possibly work with sufficient computing resources.

B.Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) was the brain child of Ian Goodfellow who demonstrated it in 2014. GANs are capable of generating synthetic text, tables, images, videos using available data. In Adversarial nets framework, the generative model is pitted against an adversary: a
discriminative model that learns to determine whether a sample is from the model distribution or the
data distribution.

GANs have 2 Deep Neural Networks , the Generator and Discriminator which compete against other

  • The Generator (Counterfeiter) takes random noise as input and generates fake images, tables, text. The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.
  • The Discriminator (Police) which tries to distinguish between the real and fake images, text. The discriminator learns to distinguish the generator’s fake data from real data. The discriminator penalises the generator for producing implausible results.

A pictorial representation of the GAN model can be shown below

Theoretically best performance of GANs are supposed to happen when the network reaches the ‘Nash equilibrium‘, i.e. when the Generator produces near fake images and the Discriminator’s loss is f ~0.5 i.e. the discriminator is unable to distinguish between real and fake images.

Note: Though I have mentioned T20 data in the above GAN model, the T20 tabular data is actually used in CTGAN which is slightly different from the above. See Reference 2) below.

C. Conditional Tabular Generative Adversial Networks (CTGANs)

“Modeling the probability distribution of rows in tabular data and generating realistic synthetic data is a non-trivial task. Tabular data usually contains a mix of discrete and continuous columns. Continuous columns may have multiple modes whereas discrete columns are sometimes imbalanced making the modeling difficult.” CTGANs handle these challenges.

I came upon CTGAN after spending some time exploring GANs via blogs, videos etc. For building the model I use real T20 match data. However, CTGAN requires immense raw computing power and a lot of RAM. My initial attempts on Colab, my Mac (12 core, 32GB RAM), took forever before eventually crashing, I switched to Kaggle and used GPUs. Still I was only able to use only a miniscule part of my T20 dataset. My match data has 1.2 million rows, hoanything > 0.05% resulted in Kaggle crashing. Since I was able to use only a fraction, I executed the CTGAN model over several iterations, each iteration with a random 0.05% sample of the dataset. At the end of each iterations I also generate synthetic dataset. Over 12 iterations, I generate close 360K of ‘synthetic‘ T20 match data.

I then augment the 1.2 million rows of ‘real‘ T20 match data with the generated ‘synthetic T20 match data to run my Deep Learning model

D. Executing the CTGAN model

a. Read the real T20 match data

!pip install ctgan
import pandas as pd
import ctgan
from ctgan import CTGAN
from numpy.random import seed

# Read the T20 match data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')

# Randomly sample 0.05% of the dataset. Note larger datasets cause the algo to crash
train_dataset = df.sample(frac=0.05)

# Print the real T20 match data
print(train_dataset.head(10))
print(train_dataset.shape)

             batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  \
363695         3333        432      134             119   153  1.285714   
1082839        3881       1180      218              30    93  3.100000   
595799         2366        683      187              65   120  1.846154   
737614         4490       1381      148              87   144  1.655172   
410202          934       1003       19             106    35  1.842105   
525627          921       1711      251               1     8  8.000000   
657669         4718       1602      130             115   145  1.260870   
666461         4309       1989       44              87    38  0.863636   
651229         3336        754       30              92    36  1.200000   
709892         3048        421       97              28   119  1.226804   

            numWickets  runsMomentum  perfIndex  isWinner  
363695            0      0.092437  18.333333         1  
1082839           5      0.200000   4.736842         0  
595799            4      0.107692   9.566667         0  
737614            1      0.114943   9.130435         1  
410202            0      0.103774  20.263158         0  
525627            8      3.000000   3.837209         0  
657669            0      0.095652  19.555556         0  
666461            0      0.126437   9.500000         0  
651229            0      0.119565  13.200000         0  
709892            3      0.285714   9.814433         1  
(59956, 10)

b. Run CTGAN model on the real T20 data

import pandas as pd
import ctgan
from ctgan import CTGAN
from numpy.random import seed
from pickle import TRUE

df = pd.read_csv('/kaggle/input/cricket1/t20.csv')

#Specify the categorical features. batsmanIdx & bowlerIdx are player embeddings
categorical_features = ['batsmanIdx','bowlerIdx']

# Create a empty dataframe for synthetic data
df1 = pd.DataFrame()

# Loop for 12 iterations. Minimize generator & discriminator loss
for i in range(12):
    print(i)
    train_dataset = df.sample(frac=0.05)
    seed(33)

    ctgan = CTGAN(epochs=20,verbose=True,generator_lr=.001,discriminator_lr=.001,batch_size=1000)
    ctgan.fit(train_dataset, categorical_features)

    # Generate synthetic data
    samples = ctgan.sample(30000)

   # Concatenate the synthetic data after each iteration
    df1 = pd.concat([df1,samples])
    print(samples.head())
    print(df1.shape)

# Output the synthetic data to file
df1.to_csv("output1.csv",index=False)

0
Epoch 1, Loss G:  8.3825,Loss D: -0.6159
Epoch 2, Loss G:  3.5117,Loss D: -0.3016
Epoch 3, Loss G:  2.1619,Loss D: -0.5713
Epoch 4, Loss G:  0.9847,Loss D:  0.1010
Epoch 5, Loss G:  0.6198,Loss D:  0.0789
Epoch 6, Loss G:  0.1710,Loss D:  0.0959
Epoch 7, Loss G:  0.3236,Loss D: -0.1554
Epoch 8, Loss G:  0.2317,Loss D: -0.0765
Epoch 9, Loss G: -0.0127,Loss D:  0.0275
Epoch 10, Loss G:  0.1477,Loss D: -0.0353
Epoch 11, Loss G:  0.0997,Loss D: -0.0129
Epoch 12, Loss G:  0.0066,Loss D: -0.0486
Epoch 13, Loss G:  0.0351,Loss D: -0.0805
Epoch 14, Loss G: -0.1399,Loss D: -0.0021
Epoch 15, Loss G: -0.1503,Loss D: -0.0518
Epoch 16, Loss G: -0.2306,Loss D: -0.0234
Epoch 17, Loss G: -0.2986,Loss D:  0.0469
Epoch 18, Loss G: -0.1941,Loss D: -0.0560
Epoch 19, Loss G: -0.3794,Loss D:  0.0000
Epoch 20, Loss G: -0.2763,Loss D:  0.0368
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0         906        224        8              75    81  1.955153           4   
1        4159        433       17              31   126  1.799280           9   
2         229        351      192              66    82  1.608527           5   
3        1926        962       63               0   117  1.658105           0   
4         286        431      128               1    36  1.605079           0   

   runsMomentum  perfIndex  isWinner  
0      0.146670   6.937595         1  
1      0.160534  10.904346         1  
2      0.516010  11.698128         1  
3      0.380986  11.914613         0  
4      0.112255   5.392120         0  
(30000, 10)
1
Epoch 1, Loss G:  7.9977,Loss D: -0.3592
Epoch 2, Loss G:  3.7418,Loss D: -0.3371
Epoch 3, Loss G:  1.6685,Loss D: -0.3211
Epoch 4, Loss G:  1.0539,Loss D: -0.3495
Epoch 5, Loss G:  0.4664,Loss D: -0.0907
Epoch 6, Loss G:  0.4004,Loss D: -0.1208
Epoch 7, Loss G:  0.3250,Loss D: -0.1482
Epoch 8, Loss G:  0.1753,Loss D:  0.0169
Epoch 9, Loss G:  0.1382,Loss D:  0.0661
Epoch 10, Loss G:  0.1509,Loss D: -0.1023
Epoch 11, Loss G: -0.0235,Loss D:  0.0210
Epoch 12, Loss G: -0.1636,Loss D: -0.0124
Epoch 13, Loss G: -0.3370,Loss D: -0.0185
Epoch 14, Loss G: -0.3054,Loss D: -0.0085
Epoch 15, Loss G: -0.5142,Loss D:  0.0121
Epoch 16, Loss G: -0.3813,Loss D: -0.0921
Epoch 17, Loss G: -0.5838,Loss D:  0.0210
Epoch 18, Loss G: -0.4033,Loss D: -0.0181
Epoch 19, Loss G: -0.5711,Loss D:  0.0269
Epoch 20, Loss G: -0.4828,Loss D: -0.0830
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0        2202        265      223              39    13  0.868927           0   
1        3641        856       35              59    26  2.236160           6   
2         676       2903      218              93    16  0.460693           1   
3        3482       3459       44             117   102  0.851471           8   
4        3046       3076       59               5    84  1.016824           2   

   runsMomentum  perfIndex  isWinner  
0      0.138586   4.733462         0  
1      0.124453   5.146831         1  
2      0.273168  10.106869         0  
3      0.129520   5.361127         0  
4      1.083525  25.677574         1  
(60000, 10)
...
...
11
Epoch 1, Loss G:  8.8362,Loss D: -0.7111
Epoch 2, Loss G:  4.1322,Loss D: -0.8468
Epoch 3, Loss G:  1.2782,Loss D:  0.1245
Epoch 4, Loss G:  1.1135,Loss D: -0.3588
Epoch 5, Loss G:  0.6033,Loss D: -0.1255
Epoch 6, Loss G:  0.6912,Loss D: -0.1906
Epoch 7, Loss G:  0.3340,Loss D: -0.1048
Epoch 8, Loss G:  0.3515,Loss D: -0.0730
Epoch 9, Loss G:  0.1702,Loss D:  0.0237
Epoch 10, Loss G:  0.1064,Loss D:  0.0632
Epoch 11, Loss G:  0.0884,Loss D: -0.0005
Epoch 12, Loss G:  0.0556,Loss D: -0.0607
Epoch 13, Loss G: -0.0917,Loss D: -0.0223
Epoch 14, Loss G: -0.1492,Loss D:  0.0258
Epoch 15, Loss G: -0.0986,Loss D: -0.0112
Epoch 16, Loss G: -0.1428,Loss D: -0.0060
Epoch 17, Loss G: -0.2225,Loss D: -0.0263
Epoch 18, Loss G: -0.2255,Loss D: -0.0328
Epoch 19, Loss G: -0.3482,Loss D:  0.0277
Epoch 20, Loss G: -0.2667,Loss D: -0.0721
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0         367       1447      129              27    30  1.242120           2   
1        2481       1528      221               4    10  1.344024           2   
2        1034       3116      132              87   153  1.142750           3   
3        1201       2868      151              60   136  1.091638           1   
4        4327       3291      108              89    22  0.842775           2   

   runsMomentum  perfIndex  isWinner  
0      1.978739   6.393691         1  
1      0.539650   6.783990         0  
2      0.107156  12.154197         0  
3      3.193574  11.992059         0  
4      0.127507  12.210876         0  
(360000, 10)

E. Sample of the Synthetic data

synthetic_data = ctgan.sample(20000)
print(synthetic_data.head(100))

    batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs    runRate  \
0         1073       3059       72              72   149   2.230236   
1         3769       1443      106               7   137   0.881409   
2          448       3048      166               6   220   1.092504   
3         2969       1244      103              82   207  12.314862   
4          180       1372      125             111    14   1.310051   
..         ...        ...      ...             ...   ...        ...   
95        1521       1040      153               6   166   1.097363   
96        2366         62       25             114   119   0.910642   
97        3506       1736      100             118   140   1.640921   
98        3343       2347       47              54    50   0.696462   
99        1957       2888      136              27   153   1.315565   

    numWickets  runsMomentum  perfIndex  isWinner  
0            0      0.111707  17.466925         0  
1            1      0.130352  14.274113         0  
2            1      0.173541  11.076731         1  
3            1      0.218977   6.239951         0  
4            4      2.829380   9.183323         1  
..         ...           ...        ...       ...  
95           0      0.223437   7.011180         0  
96           1      0.451371  16.908120         1  
97           5      0.156936   9.217205         0  
98           6      0.124536   6.273091         0  
99           1      0.249329  14.221554         0  

[100 rows x 10 columns]

F. Evaluating the synthetic T20 match data

Here the quality of the synthetic data set is evaluated.

a) Statistical evaluation

  • Read the real T20 match data
  • Read the generated T20 synthetic match data
import pandas as pd

# Read the T20 match and synthetic match data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv').  #1.2 million rows
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv')   #300K

# Randomly sample 1000 rows, and generate stats
df1=df.sample(n=1000)
real=df1.describe()
realData_stats=real.transpose
print(realData_stats)

synthetic1=synthetic.sample(n=1000)
synthetic=synthetic1.describe()
syntheticData_stats=synthetic.transpose
syntheticData_stats

a) Stats of real T20 match data

<bound method DataFrame.transpose of         batsmanIdx    bowlerIdx      ballNum  ballsRemaining         runs  \
count  1000.000000  1000.000000  1000.000000     1000.000000  1000.000000   
mean   2323.940000  1776.481000   118.165000       59.236000    77.649000   
std    1329.703046  1011.470703    70.564291       35.312934    49.098763   
min       8.000000    13.000000     1.000000        1.000000    -2.000000   
25%    1134.750000   850.000000    58.000000       28.750000    39.000000   
50%    2265.000000  1781.500000   117.000000       59.000000    72.000000   
75%    3510.000000  2662.250000   178.000000       89.000000   111.000000   
max    4738.000000  3481.000000   265.000000      127.000000   246.000000   

           runRate   numWickets  runsMomentum    perfIndex     isWinner  
count  1000.000000  1000.000000   1000.000000  1000.000000  1000.000000  
mean      1.734979     2.614000      0.310568     9.580386     0.499000  
std       5.698104     2.267189      0.686171     4.530856     0.500249  
min      -2.000000     0.000000      0.071429     0.000000     0.000000  
25%       1.009063     1.000000      0.105769     6.666667     0.000000  
50%       1.272727     2.000000      0.141026     9.236842     0.000000  
75%       1.546891     4.000000      0.250000    12.146735     1.000000  
max     166.000000    10.000000     10.000000    30.800000     1.000000

b) Stats of Synthetic T20 match data

     
           batsmanIdx    bowlerIdx      ballNum  ballsRemaining         runs  \
count  1000.000000  1000.000000  1000.000000     1000.000000  1000.000000   
mean   2304.135000  1760.776000   116.081000       50.102000    74.357000   
std    1342.348684  1003.496003    72.019228       35.795236    48.103446   
min       2.000000    15.000000    -4.000000       -2.000000    -1.000000   
25%    1093.000000   881.000000    46.000000       18.000000    30.000000   
50%    2219.500000  1763.500000   116.000000       45.000000    75.000000   
75%    3496.500000  2644.750000   180.250000       77.000000   112.000000   
max    4718.000000  3481.000000   253.000000      124.000000   222.000000   

           runRate   numWickets  runsMomentum    perfIndex     isWinner  
count  1000.000000  1000.000000   1000.000000  1000.000000  1000.000000  
mean      1.637225     3.096000      0.336540     9.278073     0.507000  
std       1.691060     2.640408      0.502346     4.727677     0.500201  
min      -4.388339     0.000000      0.083351    -0.902991     0.000000  
25%       1.077789     1.000000      0.115770     5.731931     0.000000  
50%       1.369655     2.000000      0.163085     9.104328     1.000000  
75%       1.660477     5.000000      0.311586    12.619318     1.000000  
max      23.757001    10.000000      4.630908    29.829497     1.000000

c) Plotting the Generator and Discriminator loss

import pandas as pd

# CTGAN prints out a new line for each epoch
epochs_output = str(output).split('\n')

# CTGAN separates the values with commas
raw_values = [line.split(',') for line in epochs_output]
loss_values = pd.DataFrame(raw_values)[:-1] # convert to df and delete last row (empty)

# Rename columns
loss_values.columns = ['Epoch', 'Generator Loss', 'Discriminator Loss']

# Extract the numbers from each column 
loss_values['Epoch'] = loss_values['Epoch'].str.extract('(\d+)').astype(int)
loss_values['Generator Loss'] = loss_values['Generator Loss'].str.extract('([-+]?\d*\.\d+|\d+)').astype(float)
loss_values['Discriminator Loss'] = loss_values['Discriminator Loss'].str.extract('([-+]?\d*\.\d+|\d+)').astype(float)

# the result is a row for each epoch that contains the generator and discriminator loss
loss_values.head()

	Epoch	Generator Loss	Discriminator Loss
0	1	8.0158	-0.3840
1	2	4.6748	-0.9589
2	3	1.1503	-0.0066
3	4	1.5593	-0.8148
4	5	0.6734	-0.1425
5	6	0.5342	-0.2202
6	7	0.4539	-0.1462
7	8	0.2907	-0.0155
8	9	0.2399	0.0172
9	10	0.1520	-0.0236
import plotly.graph_objects as go

# Plot loss function
fig = go.Figure(data=[go.Scatter(x=loss_values['Epoch'], y=loss_values['Generator Loss'], name='Generator Loss'),
                      go.Scatter(x=loss_values['Epoch'], y=loss_values['Discriminator Loss'], name='Discriminator Loss')])


# Update the layout for best viewing
fig.update_layout(template='plotly_white',
                    legend_orientation="h",
                    legend=dict(x=0, y=1.1))

title = 'CTGAN loss function for T20 dataset - ' 
fig.update_layout(title=title, xaxis_title='Epoch', yaxis_title='Loss')
fig.show()

G. Qualitative evaluation of Synthetic data

a) Quality of continuous columns in synthetic data

KSComplement -This metric computes the similarity of a real column vs. a synthetic column in terms of the column shapes.The KSComplement uses the Kolmogorov-Smirnov statistic. Closer to 1.0 is good and 0 is worst

from sdmetrics.single_column import KSComplement
numerical_columns=['ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']
total_score = 0
for column_name in numerical_columns:
    column_score = KSComplement.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(numerical_columns))

Column: ballNum , Score:  0.9502754283367316
Column: ballsRemaining , Score:  0.8770284103276166
Column: runs , Score:  0.9136464248633367
Column: runRate , Score:  0.9183841670732166
Column: numWickets , Score:  0.9016209114638712
Column: runsMomentum , Score:  0.8773491702213716
Column: perfIndex , Score:  0.9173808852778924

Average:  0.9079550567948624

b) Quality of categorical columns

This statistic measures the quality of generated categorical columns. 1 is best and 0 is worst

categorical_columns=['batsmanIdx','bowlerIdx']
from sdmetrics.single_column import TVComplement

total_score = 0
for column_name in categorical_columns:
    column_score = TVComplement.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(categorical_columns))

Column: batsmanIdx , Score:  0.8436263499539245
Column: bowlerIdx , Score:  0.7356177407921669

Average:  0.7896220453730457

The performance is decent but not excellent. I was unable to execute more epochs as it it required larger than the memory allowed

c) Correlation similarity

This metric measures the correlation between a pair of numerical columns and computes the similarity between the real and synthetic data – it compares the trends of 2D distributions. Best 1.0 and 0.0 is worst

import itertools
from sdmetrics.column_pairs import CorrelationSimilarity

total_score = 0
total_pairs = 0
for pair in itertools.combinations(numerical_columns,2):
    col_A, col_B = pair
    score = CorrelationSimilarity.compute(df[[col_A, col_B]], synthetic[[col_A, col_B]])
    print('Columns:', pair, ' Score:', score)
    total_score += score
    total_pairs += 1

print('\nAverage: ', total_score/total_pairs)

Columns: ('ballNum', 'ballsRemaining')  Score: 0.7153942317384889
Columns: ('ballNum', 'runs')  Score: 0.8838043045134777
Columns: ('ballNum', 'runRate')  Score: 0.8710243133637056
Columns: ('ballNum', 'numWickets')  Score: 0.7978515509750435
Columns: ('ballNum', 'runsMomentum')  Score: 0.8956281260834316
Columns: ('ballNum', 'perfIndex')  Score: 0.9275145840528048
Columns: ('ballsRemaining', 'runs')  Score: 0.9566928975064546
Columns: ('ballsRemaining', 'runRate')  Score: 0.9127313819127167
Columns: ('ballsRemaining', 'numWickets')  Score: 0.6770737279315224
Columns: ('ballsRemaining', 'runsMomentum')  Score: 0.7939260278412358
Columns: ('ballsRemaining', 'perfIndex')  Score: 0.8694582252638351
Columns: ('runs', 'runRate')  Score: 0.999593795992159
Columns: ('runs', 'numWickets')  Score: 0.9510731832916608
Columns: ('runs', 'runsMomentum')  Score: 0.9956131422133428
Columns: ('runs', 'perfIndex')  Score: 0.9742931845536701
Columns: ('runRate', 'numWickets')  Score: 0.8859830711832263
Columns: ('runRate', 'runsMomentum')  Score: 0.9174744874779561
Columns: ('runRate', 'perfIndex')  Score: 0.9491100087911353
Columns: ('numWickets', 'runsMomentum')  Score: 0.8989709776329797
Columns: ('numWickets', 'perfIndex')  Score: 0.7178946968801441
Columns: ('runsMomentum', 'perfIndex')  Score: 0.9744441623018661

Average:  0.8840738134048025

d) Category coverage

This metric measures whether a synthetic column covers all the possible categories that are present in a real column. 1.0 is best , 0 is worst

from sdmetrics.single_column import CategoryCoverage

total_score = 0
for column_name in categorical_columns:
    column_score = CategoryCoverage.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(categorical_columns))

Column: batsmanIdx , Score:  0.9533951919021509
Column: bowlerIdx , Score:  0.9913966160022942

Average:  0.9723959039522225

H. Augmenting the T20 match data set

In this final part I augment my T20 match data set with the generated synthetic T20 data set.

import pandas as pd
from numpy import savetxt
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np


from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model
import matplotlib.pyplot as plt

# Read real and synthetic data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv')

# Augment the data. Concatenate real & synthetic data
df1=pd.concat([df,synthetic])

# Create training and test samples
print("Shape of dataframe=",df1.shape)
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
print(train_dataset1.shape)

a=train_dataset1.describe()
stats=a.transpose
print(a)

a) Create A Deep Learning Model in Keras

from numpy.random import seed
seed(33)
tf.random.set_seed(432)
# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

no_of_unique_batman=len(df1["batsmanIdx"].unique()) 
print(no_of_unique_batman)
no_of_unique_bowler=len(df1["bowlerIdx"].unique()) 
print(no_of_unique_bowler)

embedding_size_bat = no_of_unique_batman ** (1/4)
print(embedding_size_bat)
embedding_size_bwl = no_of_unique_bowler ** (1/4)
print(embedding_size_bwl)

# create embedding layer for the categorical predictor
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
print(batsmanIdx_embedding)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
print(batsmanIdx_flatten)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)
print(bowlerIdx_flatten)
# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])
print(x.shape)
# add hidden layers
x = Dense(96, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)
# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)
# create model
model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()
# compile model
#optimizer=keras.optimizers.Adam(learning_rate=.01, beta_1=0.1, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
#optimizer=keras.optimizers.RMSprop(learning_rate=0.001, rho=0.2, momentum=0.2, epsilon=1e-07)
#optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1) #- Works without dropout
#optimizer = tf.keras.optimizers.RMSprop(0.01)
#optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1)
#optimizer=keras.optimizers.RMSprop(learning_rate=.005, rho=0.1, momentum=0, epsilon=1e-07)

optimizer=keras.optimizers.Adam(learning_rate=.015, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True)

model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# train the model
history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=20, batch_size=1024,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

==================================================================================================
Total params: 144,497
Trainable params: 144,497
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/20
1219/1219 [==============================] - 15s 11ms/step - loss: 0.6285 - accuracy: 0.6372 - val_loss: 0.5164 - val_accuracy: 0.7606
Epoch 2/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.5594 - accuracy: 0.7121 - val_loss: 0.4920 - val_accuracy: 0.7663
Epoch 3/20
1219/1219 [==============================] - 14s 12ms/step - loss: 0.5338 - accuracy: 0.7244 - val_loss: 0.4541 - val_accuracy: 0.7878
Epoch 4/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.5176 - accuracy: 0.7317 - val_loss: 0.4226 - val_accuracy: 0.7933
Epoch 5/20
1219/1219 [==============================] - 13s 11ms/step - loss: 0.4966 - accuracy: 0.7420 - val_loss: 0.4547 - val_accuracy: 0.7
...
...
poch 18/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.4300 - accuracy: 0.7747 - val_loss: 0.3536 - val_accuracy: 0.8288
Epoch 19/20
1219/1219 [==============================] - 14s 12ms/step - loss: 0.4269 - accuracy: 0.7766 - val_loss: 0.3565 - val_accuracy: 0.8302
Epoch 20/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.4259 - accuracy: 0.7775 - val_loss: 0.3498 - val_accuracy: 0.831

As can be seen the accuracy with augmented dataset is around 0.77, while without it I was getting 0.867 with just the real data. This degradation is probably due to the folllowing reasons

  • Only a fraction of the dataset was used for training. This was not representative of the data distribution for CTGAN to correctly synthesise data
  • The number of epochs had to be kept low to prevent Kaggle/Colab from crashing

I. Conclusion

This post shows how we can generate synthetic T20 match data to augment real T20 match data. Assuming we have sufficient processing power we should be able to generate synthetic data for augmenting our data set. This should improve the accuracy of the Win Probabily Deep Learning model.

References

  1. Generative Adversarial Networks – Ian Goodfellow et al.
  2. Modeling Tabular data using Conditional GAN
  3. Introduction to GAN
  4. Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast
  5. CTGAN
  6. Tabular Synthetic Data Generation using CTGAN
  7. CTGAN Model
  8. Interpreting the Progress of CTGAN
  9. CTGAN metrics

Also see

  1. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players
  2. Using Reinforcement Learning to solve Gridworld
  3. Deep Learning from first principles in Python, R and Octave – Part 4
  4. Practical Machine Learning with R and Python – Part 5
  5. Cricketr adds team analytics to its repertoire!!!
  6. yorkpy takes a hat-trick, bowls out Intl. T20s, BBL and Natwest T20!!!
  7. Deconstructing Convolutional Neural Networks with Tensorflow and Keras
  8. My TEDx talk on the “Internet of Things”
  9. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  10. The Anomaly

To see all posts click Index of posts

GooglyPlusPlus: Win Probability using Deep Learning and player embeddings

In my last post ‘GooglyPlusPlus now with Win Probability Analysis for all T20 matches‘ I had discussed the performance of my ML models, created with and without player embeddings, in computing the Win Probability of T20 matches. With batsman & bowler embeddings I got much better performance than without the embeddings

  • glmnet – Accuracy – 0.73
  • Random Forest (RF) – Accuracy – 0.92

While the Random Forest gave excellent accuracy, it was bulky and also took an unusually long time to predict the Win Probability of a single T20 match. The above 2 ML models were built using R’s Tidymodels. glmnet was fast, but I wanted to see if I could create a ML model that was better, lighter and faster. I had initially tried to use Tensorflow, Keras in Python but then abandoned it, since I did not know how to port the Deep Learning model to R and use in my app GooglyPlusPlus.

But later, since I was stuck with a bulky Random Forest model, I decided to again explore options for saving the Keras Deep Learning model and loading it in R. I found out that saving the model as .h5, we can load it in R and use it for predictions. Hence, I rebuilt a Deep Learning model using Keras, Python with player embeddings and I got excellent performance. The DL model was light and had an accuracy 0.8639 with an ROC_AUC of 0.964 which was great!

GooglyPlusPlus uses data from Cricsheet and is based on my R package yorkr

You can try out this latest version of GooglyPlusPlus at gpp2023-1

Here are the steps

A. Build a Keras Deep Learning model

a. Import necessary packages

import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from pathlib import Path
import matplotlib.pyplot as plt

b, Upload the data of all 9 T20 leagues (BBL, CPL, IPL, T20 (men) , T20(women), NTB, CPL, SSM, WBB)

# Read all T20 leagues 
df1=pd.read_csv('t20.csv')
print("Shape of dataframe=",df1.shape)

# Create training and test data set
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1

# Set the target data
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
train_dataset1

a=train_dataset1.describe()
stats=a.transpose
a

c. Create a Deep Learning ML model using batsman & bowler embeddings

import pandas as pd
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dense
from keras.models import Model
from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model

# Set seed
tf.random.set_seed(432)

# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

# Set the embedding size as the 4th root of unique batsmen, bowlers
no_of_unique_batman=len(df1["batsmanIdx"].unique()) 
no_of_unique_bowler=len(df1["bowlerIdx"].unique()) 
embedding_size_bat = no_of_unique_batman ** (1/4)
embedding_size_bwl = no_of_unique_bowler ** (1/4)


# create embedding layer for the categorical predictor
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)

# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])

# add hidden layers
# Use dropouts for regularisation
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)

# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)

# create a DL model
model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()

# compile model
optimizer=keras.optimizers.Adam(learning_rate=.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07, decay=0.0, amsgrad=True)

model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# train the model
history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=40, batch_size=1024,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

Model: "model_5"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 batsmanIdx (InputLayer)        [(None, 1)]          0           []                               
                                                                                                  
 bowlerIdx (InputLayer)         [(None, 1)]          0           []                               
                                                                                                  
 embedding_10 (Embedding)       (None, 1, 16)        75888       ['batsmanIdx[0][0]']             
                                                                                                  
 embedding_11 (Embedding)       (None, 1, 16)        55808       ['bowlerIdx[0][0]']              
                                                                                                  
 flatten_10 (Flatten)           (None, 16)           0           ['embedding_10[0][0]']           
                                                                                                  
 flatten_11 (Flatten)           (None, 16)           0           ['embedding_11[0][0]']           
                                                                                                  
 ballNum (InputLayer)           [(None, 1)]          0           []                               
                                                                                                  
 ballsRemaining (InputLayer)    [(None, 1)]          0           []                               
                                                                                                  
 runs (InputLayer)              [(None, 1)]          0           []                               
                                                                                                  
 runRate (InputLayer)           [(None, 1)]          0           []                               
                                                                                                  
 numWickets (InputLayer)        [(None, 1)]          0           []                               
                                                                                                  
 runsMomentum (InputLayer)      [(None, 1)]          0           []                               
                                                                                                  
 perfIndex (InputLayer)         [(None, 1)]          0           []                               
                                                                                                  
 concatenate_5 (Concatenate)    (None, 39)           0           ['flatten_10[0][0]',             
                                                                  'flatten_11[0][0]',             
                                                                  'ballNum[0][0]',                
                                                                  'ballsRemaining[0][0]',         
                                                                  'runs[0][0]',                   
                                                                  'runRate[0][0]',                
                                                                  'numWickets[0][0]',             
                                                                  'runsMomentum[0][0]',           
                                                                  'perfIndex[0][0]']              
                                                                                                  
 dense_19 (Dense)               (None, 64)           2560        ['concatenate_5[0][0]']          
                                                                                                  
 dropout_19 (Dropout)           (None, 64)           0           ['dense_19[0][0]']               
                                                                                                  
 dense_20 (Dense)               (None, 32)           2080        ['dropout_19[0][0]']             
                                                                                                  
 dropout_20 (Dropout)           (None, 32)           0           ['dense_20[0][0]']               
                                                                                                  
 dense_21 (Dense)               (None, 16)           528         ['dropout_20[0][0]']             
                                                                                                  
 dropout_21 (Dropout)           (None, 16)           0           ['dense_21[0][0]']               
                                                                                                  
 dense_22 (Dense)               (None, 8)            136         ['dropout_21[0][0]']             
                                                                                                  
 dropout_22 (Dropout)           (None, 8)            0           ['dense_22[0][0]']               
                                                                                                  
 output (Dense)                 (None, 1)            9           ['dropout_22[0][0]']             
                                                                                                  
==================================================================================================
Total params: 137,009
Trainable params: 137,009
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/40
937/937 [==============================] - 11s 10ms/step - loss: 0.5683 - accuracy: 0.6968 - val_loss: 0.4480 - val_accuracy: 0.7708
Epoch 2/40
937/937 [==============================] - 9s 10ms/step - loss: 0.4477 - accuracy: 0.7721 - val_loss: 0.4305 - val_accuracy: 0.7833
Epoch 3/40
937/937 [==============================] - 9s 10ms/step - loss: 0.4229 - accuracy: 0.7832 - val_loss: 0.3984 - val_accuracy: 0.7936
...
...
937/937 [==============================] - 10s 10ms/step - loss: 0.2909 - accuracy: 0.8627 - val_loss: 0.2943 - val_accuracy: 0.8613
Epoch 38/40
937/937 [==============================] - 10s 10ms/step - loss: 0.2892 - accuracy: 0.8633 - val_loss: 0.2933 - val_accuracy: 0.8621
Epoch 39/40
937/937 [==============================] - 10s 10ms/step - loss: 0.2889 - accuracy: 0.8638 - val_loss: 0.2941 - val_accuracy: 0.8620
Epoch 40/40
937/937 [==============================] - 10s 11ms/step - loss: 0.2886 - accuracy: 0.8639 - val_loss: 0.2929 - val_accuracy: 0.8621

d. Compute and plot the ROC-AUC for the above model

from sklearn.metrics import roc_curve

# Select a random sample set
tf.random.set_seed(59)
train = df1.sample(frac=0.9,random_state=0)
test = df1.drop(train_dataset.index)
test_dataset1 = test[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_labels = test.pop('isWinner')

# Compute the predicted values
y_pred_keras = model.predict([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']]).ravel()

# Compute TPR & FPR
fpr_keras, tpr_keras, thresholds_keras = roc_curve(test_labels, y_pred_keras)

fpr_keras, tpr_keras, thresholds_keras = roc_curve(test_labels, y_pred_keras)
from sklearn.metrics import auc

# Plot the Area Under the Curve (AUC)
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()

The ROC_AUC for the Deep Learning Model is 0.946 as seen below

e. Save the Keras model for use in Python

from keras.models import Model
model.save("wpDL.h5")

f. Load the model in R using rhdf5 package for use in GooglyPlusPlus

library(rhdf5)
dl_model <- load_model_hdf5('wpDL.h5')

This was a huge success for me to be able to create the Deep Learning model in Python and use it in my Shiny app GooglyPlusPlus. The Deep Learning Keras model is light-weight and extremely fast.

The Deep Learning model has now been integrated into GooglyPlusPlus. Now you can check the Win Probability using both a) glmnet (Logistic Regression with lasso regularisation) b) Keras Deep Learning model with dropouts as regularisation

In addition I have created 2 features based on Win Probability (WP)

i) Win Probability (Side-by-side – Plot(interactive) : With this functionality the 1st and 2nd innings will be side-by-side. When the 1st innings is played by team 1, the Win Probability of team 2 = 100 – WP (team1). Similarly, when the 2nd innings is being played by team 2, the Win Probability of team1 = 100 – WP (team 2)

ii) Win Probability (Overlapping) – Plot (static): With this functionality the Win Probabilities of both team1(1st innings) & team 2 (2nd innings) are displayed overlapping, so that we can see how the probabilities vary ball-by-ball.

Note: Since the same UI is used for all match functions I had to re-use the Plot(interactive) and Plot(static) radio buttons for Win Probability (Side-by-side) and Win Probability(Overlapping) respectively

Here are screenshots using both ML models with both functionality for some random matches

B) ICC T20 Men World Cup – Netherland-South Africa- 2022-11-06

i) Match Worm wicket chart

ii) Win Probability with LR (Side-by-Side- Plot(interactive))

iii) Win Probability LR (Overlapping- Plot(static))

iv) Win Probability Deep Learning (Side-by-side – Plot(interactive)

In the 213th ball of the innings South Africa was slightly ahead of Netherlands. After that they crashed and burned!

v) Win Probability Deep Learning (Overlapping – Plot (static)

It can be seen that in the 94th ball of both innings South Africa was ahead of Netherlands before the eventual slump.

C) Intl. T20 (Women) India – New Zealand – 2020 – 02 – 27

Here is an interesting match between India and New Zealand T20 Women’s teams. NZ successfully chased the India’s total in a wildly swinging fortunes. See the charts below

i) Match Worm Wicket chart

ii) Win Probability with LR (Side-by-side – Plot (interactive)

iii) Win Probability with LR (Overlapping – Plot (static)

iv) Win Probability with DL model (Side-by-side – Plot (interactive))

v) Win Probability with DL model (Overlapping – Plot (static))

The above functionality in plotting the Win Probability using LR or DL with both options (Side-by-side or Overlapping) is available for all 9 T20 leagues currently supported by GooglyPlusPlus.

Go ahead and give gpp2023-1 a try!!!

Do also check out my other posts’

  1. Deep Learning from first principles in Python, R and Octave – Part 7
  2. Big Data 6: The T20 Dance of Apache NiFi and yorkpy
  3. Latency, throughput implications for the Cloud
  4. Design Principles of Scalable, Distributed Systems
  5. Cricpy adds team analytics to its arsenal!!
  6. Analyzing performances of cricketers using cricketr template
  7. Modeling a Car in Android
  8. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  9. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  10. Experiments with deblurring using OpenCV
  11. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players

To see all posts click Index of posts

GooglyPlusPlus now with Win Probability Analysis for all T20 matches

In my 2 earlier posts Computing Win-Probability of T20 matches and Boosting Win Probability accuracy with player embeddings I had discussed the approaches to computing ball-by-ball Win Probability of a T20 match. My best ML models were.

  • glmnet – Logistic Regression(LR) with lasso regularization and penalty – Accuracy – 0.73
  • Random Forest (RF) – Accuracy – 0.92

Incidentally, both these models can be used on live streaming ball-by-ball data if available

I have now integrated the trained ML Logistic Regression model with penalty into my Shiny app GooglyPlusPlus. Unfortunately, the Random Forest model, besides being computationally intensive is also heavy-weight (1.29GB) when compared to LR model which is just 91.2 MB. So, I was not able to upload the Random Forest model to Shiny as the memory allowed exceeded that allowed in my paid subscription.

However, I will demonstrate the performance of both models, LR ( in my Web app) and RF (in my local machine). Incidentally the Random Forest model takes a long time to load and even longer (~90 secs) to compute the Win Probability of a T20 match, while the LR model computes in a few seconds. Interestingly, I find the LR model’s Win Probability more intuitive and explainable than the Random Forest. Possibly, the RF model overfits. I need to explore this more. Anyway, take a look at some interesting Win Probability Charts (fortune swings of teams!!!) over the course of the T20 match.

You can try out this latest version here at GooglyPlusPlus !!

Some major upsets in the ICC T20 World Cup, 2022

A) Netherlands vs South Africa – 2022-11-06

B) Zimbabwe vs Pakistan – 2022-10-27

1a) Netherlands vs South Africa – ICC 2022-11-06 (Worm-wicket chart)

Netherlands shocked South Africa and ended South Africa’s hopes for a place in the semi-finals. The match worm-wicket chart for this match is shown below

The 2 circled areas are where the South Africa lost the plot around the 8th over (~120+48=168) and 15th over (~120+90=210)

Around 205-215 ball of the innings South Africa started to lose

1b) Netherlands vs South Africa – ICC 2022-11-06 – Logistic Regression with regularisation (Shiny)

1c) 1b) Netherlands vs South Africa – ICC 2022-11-06 – Random Forest (not in Web app, local)

If you notice, for some reason, Random Forest model decided that Netherland was on the winning side, right from the start. Why would this happen? Possibly overfitting, I presume…

2a) Zimbabwe vs Pakistan – ICC 2022-10-27 Worm-wicket chart

Pakistan seemed to be cruising along with finally 11 runs in the last over, and for some reason they panicked and lost.

2a) Zimbabwe vs Pakistan -ICC 2022 – 2022-10-27 – Logistic Regression with regularisation (Shiny)

It can be seen that Pakistan did seem to have the upper hand , save the last over.

2a) Zimbabwe vs Pakistan ICC 2022-10-27 – Random Forest (not in Web app, local)

Again the Random Forest model implies that Zimbabwe was on a winning foot except in brief stretches for e.g ball 248 of the innings

So while the accuracy of Random Forest model is better by about ~20% I feel it is the Logistic Regression with penalty has generalised better and is more intuitive. Meanwhile, I will see if I can improve LR or try another model which can provide better accuracy besides generalising well

Henceforth, I will only be using the LR model that is in the Shiny app

3a) England vs New Zealand T20 Women – 2021-09-04

Another close match till the 15th over. After that England’s seems to have had a slower strike rate and lost

3b) England vs New Zealand T20 Women – 2021-09-04 – Logistic Regression

4a) Chennai Super Kings vs Gujarat Titans (IPL 2022) – Worm wicket chart

4a) Chennai Super Kings vs Gujarat Titans (IPL 2022) – Logistic Regression

5a) Islamabad United vs Peshawar Zalmi -2021-06-17 – Worm wicket chart

This match seems to be close, with both worms inter-twined almost all the way

5b) Islamabad United vs Peshawar Zalmi -2021-06-17 – Logistic Regression

According to the model Peshawar Zalmi lost the game around 14-15th over

Feel free to play around with the latest GooglyPlusPlus

Conclusion

Meanwhile I will try to come with a better model which executes fast, generalises well and is accurate. Tall order, no doubt!!!

Till such time play around with GooglyPlusPlus

Also check out my other posts

  1. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players
  2. Computer Vision: Ramblings on derivatives, histograms and contours
  3. Deep Learning from first principles in Python, R and Octave – Part 4
  4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
  5. How to program – Some essential tips
  6. Cricpy performs granular analysis of players
  7. Analyzing World Bank data with WDI, googleVis Motion Charts
  8. Practical Machine Learning with R and Python – Part 5
  9. Presentation on “Intelligent Networks, CAMEL protocol, services & applications

To see all posts click Index of posts

Computing Win-Probability of T20 matches

I am late to the ‘Win probability’ computation for T20 matches, but managed to jump on to this bus with this post. Win Probability analysis and computation have been around for some time and are used in baseball, NFL, soccer hockey and others. On T20 cricket, the following posts from White Ball Analytics & Sports Data Science were good pointers to the general approach. The data for the Win Probability computation is taken from Cricsheet.

My initial Machine Learning models could not do better than 62% accuracy. I created a data set of ~830 IPL matches which roughly came to about 280,000 rows of ball-by-ball match data but I could not move beyond 62%. Addition of T20 men moved the needle to 64% accuracy. I spent time tuning Deep Learning networks using Tensorflow and Keras. Finally, I added T20 data from 9 T20 leagues – IPL, T20 men, T20 women, BBL, CPL, NTB, PSL, WBB, SSM. I had one large data set of 1.2 million rows of ball by ball data. The data frame looks like

I created a data frame for each match from ball Num 1 to ballNum ~240 for the 1st and 2nd innings of the match. My initial set of features were ballNum, runs, runRate, numWickets. The target variable isWinner= {0,1} depending on whether the team has won or lost the match.

The features

  • ballNum – ball number for 1 ~ 240+ in data frame. 1 – 120+ for 1st innings and 120+ – 240+ in 2nd innings including noballs, wides etc.
  • runs = cumulative runs scored at the ball count
  • runRate = cumulative runs scored/ ballNum (for 1st innings) and runs= required runs/ball Num for 2nd innings
  • numWickets = wickets lost

The target variable isWinner can take values {0,1} depending whether the team won or lost

With this initial dataframe, even though I had close to 1.2 million rows of ball by ball data of T20 matches my best performance with vanilla Logistic regression & SVM in Python was about 64% accuracy.

# Read all the data from 9 T20 leagues
# BBL,CPL, IPL, NTB, PSL, SSM, T20 Men, T20 Women, WBB
df1=pd.read_csv('matchesT20M.csv')
df2=pd.read_csv('matchesIPL.csv')
df3=pd.read_csv('matchesBBL.csv')
df4=pd.read_csv('matchesCPL.csv')
df5=pd.read_csv('matchesNTB.csv')
df6=pd.read_csv('matchesPSL.csv')
df7=pd.read_csv('matchesSSM.csv')
df8=pd.read_csv('matchesT20W.csv')
df9=pd.read_csv('matchesWBB.csv')

# Create one large dataframe
df10=pd.concat([df1,df2,df3,df4,df5,df6,df7,df8,df9])
print("Shape of dataframe=",df10.shape)
print("#####################################")
stats=check_values(df10)
print("#####################################")
model_fit(df10)
#norm_model_fit(df,stats)
svm_model_fit(df10)

Shape of dataframe= (1206901, 6)
#####################################
Null values: False
It contains 0 infinite values

Accuracy of Logistic regression classifier on training set: 0.63
Accuracy of Logistic regression classifier on test set: 0.64
Accuracy: 0.64
Precision: 0.62
Recall: 0.65
F1: 0.64


Accuracy of Linear SVC classifier on training set: 0.52
Accuracy of Linear SVC classifier on test set: 0.52

With Tensorflow/Keras the performance was about 67%. I tried several things

  • Normalisation
  • Tried different learning rates
  • Different optimisers – SGD, RMSProp, Adam
  • Changed depth and width of Neural Network

However I did not get much improvement. Finally I decided to do some Feature engineering. I added 2 new features

a) Runs Momentum : This feature is based on the fact that more the wickets in hand, the more freely the batsmen can make risky strokes, hence increasing the momentum of the runs, This is calculated as

runsMomentum = (11 – numWickets)/balls remaining

b) Performance Index: This feature is the product of the run rate x wickets in hand. In other words, if the strike rate is good and fewer wickets lost at the point in the match, then the performance index is higher at that point in the match will be higher

The final set of features chosen were as below

I had also included the balls Remaining in the innings. Now with this set of features I decided to execute Tensorflow/Keras and do a GridSearch with different learning rates, optimisers. After a couple of hours of computation I got an accuracy of 0.73. I needed to be able to read the ML model in R which required installation of Tensorflow, reticulate and Keras in RStudio and I had several issues. Since I hit a roadblock I moved to regular R models

I performed WIn Probability computation in the following ways

A) Win Probability with Vanilla Logistic Regression (R)

With vanilla Logistic Regression in R using the ‘glm’ package I got an accuracy of 0.67, sensitivity of 0.68 and specificity of 0.65 as shown below

library(dplyr)
library(caret)
library(e1071)
library(ggplot2)

# Read all the data from 9 T20 leagues
# BBL,CPL, IPL, NTB, PSL, SSM, T20 Men, T20 Women, WBB
df1=read.csv("output2/matchesBBL2.csv")
df2=read.csv("output2/matchesCPL2.csv")
df3=read.csv("output2/matchesIPL2.csv")
df4=read.csv("output2/matchesNTB2.csv")
df5=read.csv("output2/matchesPSL2.csv")
df6=read.csv("output2/matchesSSM2.csv")
df7=read.csv("output2/matchesT20M2.csv")
df8=read.csv("output2/matchesT20W2.csv")
df9=read.csv("output2/matchesWBB2.csv")

# Create one large dataframe
df=rbind(df1,df2,df3,df4,df5,df6,df7,df8,df9)

# Helper function to split into training/test
trainTestSplit <- function(df,trainPercent,seed1){
  ## Sample size percent
  samp_size <- floor(trainPercent/100 * nrow(df))
  ## set the seed 
  set.seed(seed1)
  idx <- sample(seq_len(nrow(df)), size = samp_size)
  idx
  
}

train_idx <- trainTestSplit(df,trainPercent=80,seed=5)
train <- df[train_idx, ]

test <- df[-train_idx, ]
# Fit a generalized linear logistic model, 
fit=glm(isWinner~.,family=binomial,data=train,control = list(maxit = 50))

a=predict(fit,newdata=train,type="response")
# Set response >0.5 as 1 and <=0.5 as 0
b=as.factor(ifelse(a>0.5,1,0))
# Compute the confusion matrix for training data

confusionMatrix(
  factor(b, levels = 0:1),
  factor(train$isWinner, levels = 0:1)
)

Confusion Matrix and Statistics

          Reference
Prediction    
  0      1
         0 339938 160336
         1 154236 310217
                                         
               Accuracy : 0.6739         
                 95% CI : (0.673, 0.6749)
    No Information Rate : 0.5122         
    P-Value [Acc > NIR] : < 2.2e-16      
                                         
                  Kappa : 0.3473         
                                         
 Mcnemar's Test P-Value : < 2.2e-16      
                                         
            Sensitivity : 0.6879         
            Specificity : 0.6593         
         Pos Pred Value : 0.6795         
         Neg Pred Value : 0.6679         
             Prevalence : 0.5122         
         Detection Rate : 0.3524         
   Detection Prevalence : 0.5186         
      Balanced Accuracy : 0.6736         
                                         
       'Positive' Class : 0      

# This can be saved and loaded as    
saveRDS(fit, "glm.rds")
ml_model <- readRDS("glm.rds")    

Using the above ML model on Deccan Chargers vs Chennai Super on 27-04-2009 the Win Probability as the match progresses is as below

The Worm wicket graph of this match shows it was a closely fought match

B) Win Probability using Random Forests with Tidy Models – R

Initially I tried Tidy models with tuning for glmnet. The best I got was 0.67. However, I got an excellent performance using TidyModels with Random Forests. I am using Tidy Models for the first time and I have been blown away with how logically it is constructed, much like dplyr & ggplot2.

library(dplyr)
library(caret)
library(e1071)
library(ggplot2)
library(tidymodels)  

# Helper packages
library(readr)       # for importing data
library(vip) 
library(ranger)
# Read all the data from 9 T20 leagues
# BBL,CPL, IPL, NTB, PSL, SSM, T20 Men, T20 Women, WBB

df1=read.csv("output2/matchesBBL2.csv")
df2=read.csv("output2/matchesCPL2.csv")
df3=read.csv("output2/matchesIPL2.csv")
df4=read.csv("output2/matchesNTB2.csv")
df5=read.csv("output2/matchesPSL2.csv")
df6=read.csv("output2/matchesSSM2.csv")
df7=read.csv("output2/matchesT20M2.csv")
df8=read.csv("output2/matchesT20W2.csv")
df9=read.csv("output2/matchesWBB2.csv")

# Create one large dataframe
df=rbind(df1,df2,df3,df4,df5,df6,df7,df8,df9)

dim(df)
[1] 
1205909       8

# Take a peek at the dataset
glimpse(df)
$ ballNum        <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28…
$ ballsRemaining <int> 125, 124, 123, 122, 121, 120, 119, 118, 117, 116, 115, 114, 113, 112, 111, 110, 109, 108, 107, 106, 1…
$ runs           <int> 1, 1, 2, 3, 3, 3, 4, 4, 5, 5, 6, 7, 13, 14, 16, 18, 18, 18, 24, 24, 24, 26, 26, 32, 32, 33, 34, 34, 3…
$ runRate        <dbl> 1.0000000, 0.5000000, 0.6666667, 0.7500000, 0.6000000, 0.5000000, 0.5714286, 0.5000000, 0.5555556, 0.…
$ numWickets     <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3,…
$ runsMomentum   <dbl> 0.08800000, 0.08870968, 0.08943089, 0.09016393, 0.09090909, 0.09166667, 0.09243697, 0.09322034, 0.094…
$ perfIndex      <dbl> 11.000000, 5.500000, 7.333333, 8.250000, 6.600000, 5.500000, 6.285714, 5.500000, 6.111111, 5.000000, …
$ isWinner       <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…

df %>% 
  count(isWinner) %>% 
  mutate(prop = n/sum(n))

set.seed(123)
df$isWinner = as.factor(df$isWinner)

# Split the data into training and test set in 80%:20%
splits      <- initial_split(df,prop = 0.80)
df_other <- training(splits)
df_test  <- testing(splits)

# Create a validation set from training set in 80%:20%
set.seed(234)
val_set <- validation_split(df_other, 
                            prop = 0.80)
val_set

# Setup for Random forest using Ranger for classification
# Set up cores for parallel execution
cores <- parallel::detectCores()
cores

#Set up Random Forest engine
rf_mod <- 
  rand_forest(mtry = tune(), min_n = tune(), trees = 1000) %>% 
  set_engine("ranger", num.threads = cores) %>% 
  set_mode("classification")

rf_mod
# The Random Forest engine includes mtry which is number of predictor 
# variables required at each decision  tree with min_n the minimum number # of 
Random Forest Model Specification (classification)

Main Arguments:
  mtry = tune()
  trees = 1000
  min_n = tune()

Engine-Specific Arguments:
  num.threads = cores

Computational engine: ranger


# Setup the predictors and target variable
# Normalise all predictors. Random Forest don't need normalization but
# I have done it anyway
rf_recipe <-
  recipe(isWinner ~ ., data = df_other) %>% 
  step_normalize(all_predictors())

# Create workflow adding the ML model and recipe
rf_workflow <- 
  workflow() %>% 
  add_model(rf_mod) %>% 
  add_recipe(rf_recipe)

# The tune is done for 5 different values of the tuning parameters.
# Metrics include accuracy and roc_auc
rf_res <- 
  rf_workflow %>% 
  tune_grid(val_set,
            grid = 5,
            control = control_grid(save_pred = TRUE),
            metrics = metric_set(accuracy,roc_auc))

$ Pick the best of ROC/AUC
rf_res %>% 
  show_best(metric = "roc_auc")

We can see that when mtry (number of predictors) is 5 or 7 the ROC_AUC is 0.834 which is quite good

# A tibble: 5 × 8
   mtry min_n .metric .estimator  mean     n std_err .config             
  <int> <int> <chr>   <chr>      <dbl> <int>   <dbl> <chr>               
1     5    26 roc_auc binary     0.834     1      NA Preprocessor1_Model5
2     7    36 roc_auc binary     0.834     1      NA Preprocessor1_Model3
3     2    17 roc_auc binary     0.833     1      NA Preprocessor1_Model4
4     1    20 roc_auc binary     0.832     1      NA Preprocessor1_Model2
5     5     6 roc_auc binary     0.825     1      NA Preprocessor1_Model1


# Select the model with highest accuracy
rf_res %>% 
  show_best(metric = "accuracy")
   mtry min_n .metric  .estimator  mean     n std_err .config             
  <int> <int> <chr>    <chr>      <dbl> <int>   <dbl> <chr>               
1     7    36 accuracy binary     0.737     1      NA Preprocessor1_Model3
2     5    26 accuracy binary     0.736     1      NA Preprocessor1_Model5
3     1    20 accuracy binary     0.736     1      NA Preprocessor1_Model2
4     2    17 accuracy binary     0.735     1      NA Preprocessor1_Model4
5     5     6 accuracy binary     0.731     1      NA Preprocessor1_Model1

# The model with mtry (number of predictors) is 7 has the best accuracy. 
# Hence the best model has mtry=7 and min_n=36

rf_best <- 
  rf_res %>% 
  select_best(metric = "accuracy")

# Display the best model
rf_best
# A tibble: 1 × 3
   mtry min_n .config             
  <int> <int> <chr>               
1     7    36 Preprocessor1_Model3


rf_res %>% 
  collect_predictions()
   id         .pred_class  .row  mtry min_n .pred_0  .pred_1 isWinner .config             
   <chr>      <fct>       <int> <int> <int>   <dbl>    <dbl> <fct>    <chr>               
 1 validation 1               1     5     6 0.497   0.503    0        Preprocessor1_Model1
 2 validation 1               9     5     6 0.00753 0.992    1        Preprocessor1_Model1
 3 validation 0              10     5     6 0.627   0.373    0        Preprocessor1_Model1
 4 validation 0              16     5     6 0.998   0.002    0        Preprocessor1_Model1
 5 validation 1              18     5     6 0.270   0.730    1        Preprocessor1_Model1
 6 validation 0              23     5     6 0.899   0.101    0        Preprocessor1_Model1
 7 validation 1              26     5     6 0.452   0.548    1        Preprocessor1_Model1
 8 validation 0              30     5     6 0.657   0.343    1        Preprocessor1_Model1
 9 validation 0              34     5     6 0.576   0.424    0        Preprocessor1_Model1
10 validation 0              35     5     6 1.00    0.000167 0        Preprocessor1_Model1

rf_auc <- 
  rf_res %>% 
  collect_predictions(parameters = rf_best) %>% 
  roc_curve(isWinner, .pred_0) %>% 
  mutate(model = "Random Forest")

autoplot(rf_auc)

I

The Final Model

# Create the final Random Forest model with mtry=7 and min_n=36
# engine as "ranger" for classification
last_rf_mod <- 
  rand_forest(mtry = 7, min_n = 36, trees = 1000) %>% 
  set_engine("ranger", num.threads = cores, importance = "impurity") %>% 
  set_mode("classification")


# the last workflow is updated with the final model
last_rf_workflow <- 
  rf_workflow %>% 
  update_model(last_rf_mod)

set.seed(345)
last_rf_fit <- 
  last_rf_workflow %>% 
  last_fit(splits)

# Collect metrics
last_rf_fit %>% 
  collect_metrics()
  .metric  .estimator .estimate .config             
  <chr>    <chr>          <dbl> <chr>               
1 accuracy binary         0.739 Preprocessor1_Model1
2 roc_auc  binary         0.837 Preprocessor1_Model1

The Random Forest model gives an accuracy of 0.739 and ROC_AUC of .837 which I think is quite good. This is roughly what I got with Tensorflow/Keras

# Get the feature importance 
last_rf_fit %>% 
  extract_fit_parsnip() %>% 
  vip(num_features = 7)

Interestingly the feature that I engineered seems to have the maximum importancce namely Performance Index which is a product of Run rate x Wicket in Hand. I would have thought numWickets would be important but in T20 match probably is is not.

 generate predictions from the test set
test_predictions <- last_rf_fit %>% collect_predictions()
> test_predictions
# A tibble: 241,182 × 7
id               .pred_0 .pred_1  .row .pred_class isWinner .config             
<chr>              <dbl>   <dbl> <int> <fct>       <fct>    <chr>               
  1 train/test split   0.496   0.504     1 1           0        Preprocessor1_Model1
2 train/test split   0.640   0.360    11 0           0        Preprocessor1_Model1
3 train/test split   0.596   0.404    14 0           0        Preprocessor1_Model1
4 train/test split   0.287   0.713    22 1           0        Preprocessor1_Model1
5 train/test split   0.616   0.384    28 0           0        Preprocessor1_Model1
6 train/test split   0.516   0.484    36 0           0        Preprocessor1_Model1
7 train/test split   0.754   0.246    37 0           0        Preprocessor1_Model1
8 train/test split   0.641   0.359    39 0           0        Preprocessor1_Model1
9 train/test split   0.811   0.189    40 0           0        Preprocessor1_Model1
10 train/test split   0.618   0.382    42 0           0        Preprocessor1_Model1


# generate a confusion matrix
test_predictions %>% 
  conf_mat(truth = isWinner, estimate = .pred_class)

          Truth
Prediction     0     1
         0 92173 31623
         1 31320 86066

# Create the final model on the train/test data
final_model <- fit(last_rf_workflow, df_other)

# Final model
final_model
══ Workflow [trained] ════════════════════════════════════════════════════════════════════════════════════════════════════════
Preprocessor: Recipe
Model: rand_forest()

── Preprocessor ──────────────────────────────────────────────────────────────────────────────────────────────────────────────
1 Recipe Step

• step_normalize()

── Model ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Ranger result

Call:
 ranger::ranger(x = maybe_data_frame(x), y = y, mtry = min_cols(~7,      x), num.trees = ~1000, min.node.size = min_rows(~36, x),      num.threads = ~cores, importance = ~"impurity", verbose = FALSE,      seed = sample.int(10^5, 1), probability = TRUE) 

Type:                             Probability estimation 
Number of trees:                  1000 
Sample size:                      964727 
Number of independent variables:  7 
Mtry:                             7 
Target node size:                 36 
Variable importance mode:         impurity 
Splitrule:                        gini 
OOB prediction error (Brier s.):  0.1631303

The Random Forest Model’s performance has been quite impressive and probably requires further exploration.

# Saving and loading the model
save(final_model, file = "fit.rda")
load("fit.rda")

#Predicting the Win Probability of CSK vs DD match on 12 May 2012

Comparing this with the Worm wicket graph of this match we see that DD had no chance at all

C) Win Probability with Tensorflow/Keras with Grid Search – Python

I spent a fair amount of time tuning the hyper parameters of the Keras Deep Learning Network. Finally did go for the Grid Search. Incidentally I did ask ChatGPT to suggest code snippets for GridSearch which it promptly did!!!

import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from sklearn.model_selection import GridSearchCV

# Define the model
def create_model(optimizer='adam'):
    tf.random.set_seed(4)
    model = tf.keras.Sequential([
        keras.layers.Dense(32, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
        keras.layers.Dense(16, activation=tf.nn.relu),
        keras.layers.Dense(8, activation=tf.nn.relu),
        keras.layers.Dense(1,activation=tf.nn.sigmoid)
    ])

    # Since this is binary classification use binary_crossentropy
    model.compile(loss='binary_crossentropy',
                    optimizer=optimizer,
                    metrics='accuracy')
    return(model)

    # Create a KerasClassifier object
model = keras.wrappers.scikit_learn.KerasClassifier(build_fn=create_model)

# Define the grid of hyperparameters to search over
batch_size = [1024]
epochs = [40]
learning_rate = [0.01, 0.001, 0.0001]
optimizer = ['SGD', 'RMSprop', 'Adagrad', 'Adadelta', 'Adam', 'Adamax', 'Nadam']

param_grid = dict(dict(optimizer=optimizer,batch_size=batch_size, epochs=epochs) )
# Create the grid search object
grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)

# Fit the grid search object to the training data
grid_search.fit(normalized_train_data, train_labels)

# Print the best hyperparameters
print('Best hyperparameters:', grid_search.best_params_)
# summarize results
print("Best: %f using %s" % (grid_search.best_score_, grid_search.best_params_))
means = grid_search.cv_results_['mean_test_score']
stds = grid_search.cv_results_['std_test_score']
params = grid_search.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
    print("%f (%f) with: %r" % (mean, stdev, param))

The best worked out to be the optimiser ‘Nadam’ with a learning rate of 0.001

import matplotlib.pyplot as plt
# Create a model
tf.random.set_seed(4)
model = tf.keras.Sequential([
    keras.layers.Dense(32, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(16, activation=tf.nn.relu),
    keras.layers.Dense(8, activation=tf.nn.relu),
    keras.layers.Dense(1,activation=tf.nn.sigmoid)
  ])

# Use the Nadam optimiser
optimizer=keras.optimizers.Nadam(learning_rate=.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07, decay=0.0)

# Since this is binary classification use binary_crossentropy
model.compile(loss='binary_crossentropy',
                optimizer=optimizer,
                metrics='accuracy')

# Fit 
#history=model.fit(
#  train_dataset1, train_labels,batch_size=1024,
#  epochs=40, validation_data=(test_dataset1,test_labels), verbose=1)
history=model.fit(
  normalized_train_data, train_labels,batch_size=1024,
  epochs=40, validation_data=(normalized_test_data,test_labels), verbose=1)

Epoch 37/40
943/943 [==============================] - 3s 3ms/step - loss: 0.4971 - accuracy: 0.7310 - val_loss: 0.4968 - val_accuracy: 0.7357
Epoch 38/40
943/943 [==============================] - 3s 3ms/step - loss: 0.4970 - accuracy: 0.7310 - val_loss: 0.4974 - val_accuracy: 0.7378
Epoch 39/40
943/943 [==============================] - 4s 4ms/step - loss: 0.4970 - accuracy: 0.7309 - val_loss: 0.4994 - val_accuracy: 0.7296
Epoch 40/40
943/943 [==============================] - 3s 3ms/step - loss: 0.4969 - accuracy: 0.7311 - val_loss: 0.4998 - val_accuracy: 0.7300
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

Conclusion

So, the Keras Deep Learning Network gives about the same performance of Random Forest in Tidy Models. But I went with R Random Forest as it was easier to save and load the model for use with my data. Also, I am not sure whether the performance of the ML model can be improved beyond a point. However, I will continue to explore.

Watch this space!!!

Also see

  1. Natural language processing: What would Shakespeare say?
  2. Revisiting World Bank data analysis with WDI and gVisMotionChart
  3. The mechanics of Convolutional Neural Networks in Tensorflow and Keras
  4. Deep Learning from first principles in Python, R and Octave – Part 4
  5. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  6. Latency, throughput implications for the Cloud
  7. Practical Machine Learning with R and Python – Part 4
  8. Pitching yorkpy…swinging away from the leg stump to IPL – Part 3
  9. Experiments with deblurring using OpenCV
  10. Design Principles of Scalable, Distributed Systems

To see all posts click Index of posts

References

  1. White Ball Analytics
  2. Twenty20 Win Probability Added
  3. Tidy models – A predictive modeling case study
  4. Tidymodels: tidy machine learning in R
  5. A gentle introduction to Tidy models
  6. How to Grid Search Hyperparameters for Deep Learning Models in Python with Keras
  7. ChatGPT

Using embeddings, collaborative filtering with Deep Learning to analyse T20 players

There is a school of thought which considers that total runs scored and strike rate for a batsman, or total wickets taken and economy rate for a bowler, do not tell the whole story. This is true to a fair extent. The runs scored or the wickets taken could have been against weaker teams and hence the runs, strike rate or the wickets and economy rate alone do not capture all the performance details of the batsman or bowler. A technique to determine the performance of batsmen against different bowlers and identify the batsman’s possible performance even against bowlers he/she has not yet faced could be done with collaborative filtering. Collaborative filtering, with embeddings can also be used to group players with similar characteristics. Similarly, we could also identify the performance of bowlers versus different batsmen. Hence we need to look at average runs, SR and total wickets, ER with the lens of batsmen, bowlers against similar opposition. This is where collaborative filtering is useful.

The table below shows the performance of all batsman against all bowlers in the table below. The row in the table below is the batsman and the column is the bowler, with the value in the cell is the total Runs scored by the batsman against the bowler in all matches. Note the values are 0 for batsmen who have not yet faced specific bowlers. The table is fairly sparse.

Table A

Similarly, we can compute the performance of all bowlers against all batsmen as in the table below. Here the row is the bowler, the column batsman and the value in the cell is the number of times the bowler got the batsman’s wicket. As before the data is sparsely populated

This problem of computing batsman’s performance against bowlers or vice versa, is identical to the user vs movie rating problem used in collaborative filtering. For e.g we could consider

This above problem depicted could be computed using collaborative filtering with embeddings. We could assign sequential numbers for the batsmen from 1 to M, and for the bowlers from 1 to N. The total runs scored could be represented only for the rows where there are values. One way to solve this problem in Machine Learning is to use One Hot Encoding (OHE), where we assign values for each row and each column and map the values of the table with values of the cell for each combination. But this would take a enormous computation time and memory. The solution to this is use vector embeddings. Here embeddings could be used for capturing the sparse tensors between the batsmen, bowlers, runs scored or vice versa between bowlers against batsmen and the wickets taken. We only need to consider the cells for which values exist. An embedding is a relatively low-dimensional space, into which you can translate high-dimensional vectors. An embedding captures some of the semantics of the input by placing semantically similar inputs close together in the embedding space.

a) To compute bowler performances and identify similarities between bowlers the following embedding in the Deep Learning Network was used

To compute batsmen similarities a similar Deep Learning network for bowler vs batsmen is used

I had earlier created another post Player Performance Estimation using AI Collaborative Filtering for batsman and bowler recommendation, using R package Recommender Lab. However, I was not too happy with the results I got with this R package. When I searched the net for material on using embeddings for collaborative filtering, most of material on the web on movie lens or word2vec are repetitive and have no new material. Finally, this short video lecture from Developer Google on Embeddings provided the most clarity.

I have created 4 Colab notebooks to identify player similarities (recommendations)

a) Batsman similarities IPL

b) Batsman similarities T20

c) Bowler similarities IPL

d) Bowler similarities T20

For creating the model I have used all the data for T20 and IPL from so that I get the best results. The data is from Cricsheet. I have also used Google’s Embeddings Projector to display batsman and bowler embedding to and to group similar players

All the Colab notebooks and the data associated with the code are available in Github. Feel free to download and execute them. See if you get better performance. I tried a wide variety of hyperparameters – learning rate, width and depth of nodes per layer, number of layers, gradient methods etc.

You can download all the code & data from Github at embeddings

A) Batsman Recommender IPL (BatsmanRecommenderIPLA.ipynb)

Steps for creating the model

a) Upload bowler vs batsmen with times wicket was taken for batsman. This will be a sparse matrix

b) Assign integer indices for bowlers, batsmen

c) Add additional input features balls, runs conceded and Economy rate

d) Minimise loss for wickets taken for the bowler using SGD

e) Display embeddings of similar batsmen using Tensorboard projector

a) Upload data

Upload data file
2. Remove rows where wickets = 0

from google.colab import files
import io
uploaded = files.upload()
df2 = pd.read_csv(io.BytesIO(uploaded['bowlerVsBatsmanIPLE.csv']))
print(df2.shape)
df2 = df2.loc[df2['wicketTaken']!= 0]
print(df2.shape)

uploaded = files.upload()
df6 = pd.read_csv(io.BytesIO(uploaded['bowlerVsBatsmanIPLAll.csv']))
df6
     


Out[14]:

bowler1batsman1ballsrunsConcededER
0A Ashish ReddyDJG Sammy100.000000
1A Ashish ReddyG Gambhir101710.200000
2A Ashish ReddyJEC Franklin200.000000
3A Ashish ReddyLRPL Taylor567.200000
4A Ashish ReddyMA Agarwal3714.000000
8550Z KhanVishnu Vinod4812.000000
8551Z KhanVS Malik3510.000000
8552Z KhanW Jaffer732.571429
8553Z KhanYK Pathan22359.545455
8554Z KhanYuvraj Singh12126.000000

b) Create integer dictionaries for batsmen & bowlers

bowlers = df3["bowler1"].unique().tolist()
bowlers
# Create dictionary of bowler to index
bowlers2index = {x: i for i, x in enumerate(bowlers)}
bowlers2index
#Create dictionary of index tp bowler
index2bowlers = {i: x for i, x in enumerate(bowlers)}
index2bowlers


batsmen = df3["batsman1"].unique().tolist()
batsmen
# Create dictionary of batsman to index
batsmen2index = {x: i for i, x in enumerate(batsmen)}
batsmen2index
# Create dictionary of index to batsman
index2batsmen = {i: x for i, x in enumerate(batsmen)}
index2batsmen

#Map bowler, batsman to respective indices
df3["bowler"] = df3["bowler1"].map(bowlers2index)
df3["batsman"] = df3["batsman1"].map(batsmen2index)
df3
num_bowlers =len(bowlers2index)
num_batsmen = len(batsmen2index)
df3["wicketTaken"] = df3["wicketTaken"].values.astype(np.float32)
df3
# min and max ratings will be used to normalize the ratings later
min_wicketTaken = min(df3["wicketTaken"])
max_wicketTaken = max(df3["wicketTaken"])

print(
    "Number of bowlers: {}, Number of batsmen: {}, Min wicketsTaken: {}, Max wicketsTaken: {}".format(
        num_bowlers, num_batsmen, min_wicketTaken, max_wicketTaken
    )
)

c) Concatenate additional features

df3
df6
df31=pd.concat([df3,df6],axis=1)
df31

d) Create a Tensorflow/Keras deep learning mode. Minimise using Mean Squared Error using Stochastic Gradient Descent. I used ‘dropouts’ to regularise the model to keep validation loss within limits

tf.random.set_seed(4)
vector_size=len(batsmen2index)

df4=df31[['bowler','batsman','wicketTaken','balls','runsConceded','ER']]
df4
train_dataset = df4.sample(frac=0.9,random_state=0)
test_dataset = df4.drop(train_dataset.index)

train_dataset1 = train_dataset[['bowler','batsman','balls','runsConceded','ER']]
test_dataset1 = test_dataset[['bowler','batsman','balls','runsConceded','ER']]
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
#print(train_stats)

train_labels = train_dataset.pop('wicketTaken')
test_labels = test_dataset.pop('wicketTaken')

# Create a Deep Learning model with keras
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vector_size,16,input_length=5),
    tf.keras.layers.Flatten(),
    keras.layers.Dropout(.2),
    keras.layers.Dense(16),
 
    keras.layers.Dense(8,activation=tf.nn.relu),
    
    keras.layers.Dense(4,activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])

# Print the model summary
#model.summary()
# Use the Adam optimizer with a learning rate of 0.01
#optimizer=keras.optimizers.Adam(learning_rate=.0009, beta_1=0.5, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
#optimizer=keras.optimizers.RMSprop(learning_rate=0.01, rho=0.2, momentum=0.2, epsilon=1e-07)
#optimizer=keras.optimizers.SGD(learning_rate=.009,momentum=0.1) - Works without dropout
optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1)

model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                )

 # Setup the training parameters
#model.compile(loss='binary_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
# Create a model
history=model.fit(
  train_dataset1, train_labels,batch_size=32,
  epochs=40, validation_data = (test_dataset1,test_labels), verbose=1)

e) Plot losses

f) Predict wickets that will be taken by bowlers against random batsmen


df5= df4[['bowler','batsman','balls','runsConceded','ER']]
test1 = df5.sample(n=10)
test1.shape
for i in range(test1.shape[0]):
      print('Bowler :', index2bowlers.get(test1.iloc[i,0]), ", Batsman : ",index2batsmen.get(test1.iloc[i,1]), '- Times wicket Prediction:',model.predict(test1.iloc[[i]]))
1/1 [==============================] - 0s 90ms/step
Bowler : Harbhajan Singh , Batsman :  AM Nayar - Times wicket Prediction: [[1.0114906]]
1/1 [==============================] - 0s 18ms/step
Bowler : T Natarajan , Batsman :  Arshdeep Singh - Times wicket Prediction: [[0.98656166]]
1/1 [==============================] - 0s 19ms/step
Bowler : KK Ahmed , Batsman :  A Mishra - Times wicket Prediction: [[1.0504484]]
1/1 [==============================] - 0s 24ms/step
Bowler : M Muralitharan , Batsman :  F du Plessis - Times wicket Prediction: [[1.0941994]]
1/1 [==============================] - 0s 25ms/step
Bowler : SK Warne , Batsman :  DR Smith - Times wicket Prediction: [[1.0679393]]
1/1 [==============================] - 0s 28ms/step
Bowler : Mohammad Nabi , Batsman :  Ishan Kishan - Times wicket Prediction: [[1.403399]]
1/1 [==============================] - 0s 32ms/step
Bowler : R Bhatia , Batsman :  DJ Thornely - Times wicket Prediction: [[0.89399755]]
1/1 [==============================] - 0s 26ms/step
Bowler : SP Narine , Batsman :  MC Henriques - Times wicket Prediction: [[1.1997008]]
1/1 [==============================] - 0s 19ms/step
Bowler : AS Rajpoot , Batsman :  K Gowtham - Times wicket Prediction: [[0.9911405]]
1/1 [==============================] - 0s 21ms/step
Bowler : K Rabada , Batsman :  P Simran Singh - Times wicket Prediction: [[1.0064855]]

g) The embedding can be visualised using Google’s Embedding Projector, which identifies other batsmen who have similar characteristics. Here Cosine Similarity is used for grouping similar batsmen of IPL

The closest neighbor for AB De Villiers in IPL is SK Raina, then Rohit Sharma as seen in the visualisation below

B. Bowler Recommender T20 (BowlerRecommenderT20M1A.ipynb)

Similar to how batsman was set up,

The steps are

a) Upload data for T20 Batsman vs Bowler with Total runs scored. This will be a sparse matrix

b) Create integer dictionaries for batsman & bowler

c) Add additional features like fours, sixes and strike rate

d) Minimise loss for wicket taken

e) Display embeddings of bowlers using Tensorboard Embeddings Projector

Minimizing the loss for wicket taken using SGD

tf.random.set_seed(4)
vector_size=len(batsman2index)

#Normalize target variable
df4=df31[['bowler','batsman','totalRuns','fours','sixes','ballsFaced']]
df4['normalizedRuns'] = (df4['totalRuns'] -df4['totalRuns'].mean())/df4['totalRuns'].std()
print(df4)
train_dataset = df4.sample(frac=0.8,random_state=0)
test_dataset = df4.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsman','bowler','fours','sixes','ballsFaced']]
test_dataset1 = test_dataset[['batsman','bowler','fours','sixes','ballsFaced']]

train_labels = train_dataset.pop('normalizedRuns')
test_labels = test_dataset.pop('normalizedRuns')
train_labels
print(train_dataset1)

# Create a Deep Learning model with keras
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vector_size,16,input_length=5),
    tf.keras.layers.Flatten(),
    keras.layers.Dropout(.2),
    keras.layers.Dense(16),
 
    keras.layers.Dense(8,activation=tf.nn.relu),
    
    keras.layers.Dense(4,activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])

# Print the model summary
#model.summary()
# Use the Adam optimizer with a learning rate of 0.01
#optimizer=keras.optimizers.Adam(learning_rate=.0009, beta_1=0.5, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
#optimizer=keras.optimizers.RMSprop(learning_rate=0.001, rho=0.2, momentum=0.2, epsilon=1e-07)
#optimizer=keras.optimizers.SGD(learning_rate=.009,momentum=0.1) - Works without dropout
optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1)

model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                )

 # Setup the training parameters
#model.compile(loss='binary_crossentropy',optimizer='rmsprop',metrics=['accuracy'])
# Create a model
history=model.fit(
  train_dataset1, train_labels,batch_size=32,
  epochs=40, validation_data = (test_dataset1,test_labels), verbose=1)
model.predict(train_dataset1[1:10])
df5= df4[['batsman','bowler','fours','sixes','ballsFaced']]
test1 = df5.sample(n=10)
model.predict(test1)
#(model.predict(test1)* df4['totalRuns'].std()) + df4['totalRuns'].mean()
for i in range(test1.shape[0]):
        print('Batsman :', index2batsman.get(test1.iloc[i,0]), ", Bowler : ",index2bowler.get(test1.iloc[i,1]), '- Total runs Prediction:',(model.predict(test1.iloc[i])* df4['totalRuns'].std()) + df4['totalRuns'].mean())
1/1 [==============================] - 0s 396ms/step
1/1 [==============================] - 0s 112ms/step
1/1 [==============================] - 0s 183ms/step
Batsman : G Chohan , Bowler :  Khawar Ali - Total runs Prediction: [[1.8883028]]
1/1 [==============================] - 0s 56ms/step
Batsman : Umar Akmal , Bowler :  LJ Wright - Total runs Prediction: [[9.305391]]
1/1 [==============================] - 0s 68ms/step
Batsman : M Shumba , Bowler :  Simi Singh - Total runs Prediction: [[19.662743]]
1/1 [==============================] - 0s 30ms/step
Batsman : CH Gayle , Bowler :  RJW Topley - Total runs Prediction: [[16.854687]]
1/1 [==============================] - 0s 39ms/step
Batsman : BA King , Bowler :  Taskin Ahmed - Total runs Prediction: [[3.5154686]]
1/1 [==============================] - 0s 102ms/step
Batsman : KD Shah , Bowler :  Avesh Khan - Total runs Prediction: [[8.411661]]
1/1 [==============================] - 0s 38ms/step
Batsman : ST Jayasuriya , Bowler :  SCJ Broad - Total runs Prediction: [[5.867449]]
1/1 [==============================] - 0s 45ms/step
Batsman : AB de Villiers , Bowler :  Saeed Ajmal - Total runs Prediction: [[15.150892]]
1/1 [==============================] - 0s 46ms/step
Batsman : SV Samson , Bowler :  J Little - Total runs Prediction: [[10.44426]]
1/1 [==============================] - 0s 102ms/step
Batsman : Zawar Farid , Bowler :  GJ Delany - Total runs Prediction: [[1.9770675]]

Identifying similar bowlers using Embeddings Projector for T20

Bhuvaneshwar Kumar’s performance is closest to CR Woakes

Note: Incidentally the accuracy in the above model was not too good. I may work on this again later!

C) Bowler Embeddings IPL – Grouping similar bowlers of IPL with Embeddings Projector (BowlerRecommenderIPLA.ipynb)

D) Batting Embeddings T20 – Grouping similar batsmen of T20 (BatsmanRecommenderT20MA.ipynb)

The Tensorboard Pmbeddings projector is also interesting. There are multiple ways the data can be visualised namely UMAP, T-SNE, PCA(included). You could play with it.

As mentioned above the Colab notebooks and data are available at Github embeddings

The ability to identify batsmen & bowlers who would perform similarly against specific bowling attacks coupled with the average runs & strike rate should give a good measure of a player’s performance.

Take a look at some of my other posts

  1. Using Reinforcement Learning to solve Gridworld
  2. Deep Learning from first principles in Python, R and Octave – Part 4
  3. Big Data 7: yorkr waltzes with Apache NiFi
  4. Programming languages in layman’s language
  5. Pitching yorkpy…swinging away from the leg stump to IPL – Part 3
  6. Re-introducing cricketr! : An R package to analyze performances of cricketers
  7. The making of Total Control Android game
  8. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”
  9. Exploring Quantum Gate operations with QCSimulator

To see all posts click Index of posts

Near Real-time Analytics of ICC Men’s T20 World Cup with GooglyPlusPlus

In my last post GooglyPlusPlus gets ready for ICC Men’s T20 World Cup, I had mentioned that GooglyPlusPlus was preparing for the big event the ICC Men’s T20 World cup. Now that the T20 World cup is underway, my Shiny app in R, GooglyPlusPlus ,will be generating near real-time analytics of matches completed the previous day. Besides the app can also do historical analysis of players, teams and matches.

The whole process is automated. A cron job will execute every day, in the morning, which will automatically download the matches of the previous day from Cricsheet, unzip them, start a pipeline which will transform and process the match data into necessary folders and finally upload the newly acquired data into my Shiny app. Hence, you will be able to access all the breathless, pulsating cricketing action in timeless, interactive plots and tables which will capture all aspects of Men’s T20 matches, namely batsman, bowler performance, match analysis, team-vs-team, team-vs-all teams besides ranking of batsmen & bowlers. Since the data is cumulative, all the analytics are historical and current.

Check out GooglyPlusPlus!!

The data for GooglyPlusPlus is taken from Cricsheet

Interest in cricket, has mushroomed in recent times around the world, with the addition of new formats which started with ODI, T20, T10, 100 ball and so on. There are leagues which host these matches at different levels around the world. While GooglyPlusPlus, provides near real-time analytics of Men’s T20 World cup, we can clearly envision a big data platform which ingests matches daily from multiple cricket formats, leagues around the world generating real-time and near real-time analytics which are essential these days to selection of teams at different levels through auctions. For more discussion on this see my posts

  1. Big Data 7: yorkr waltzes with Apache NiFi
  2. Big Data 6: The T20 Dance of Apache NiFi and yorkpy

We could imagine a Data Lake, into which are ingested data from the different cricket formats, leagues through appropriate technology connectors. Once the data is ingested, we could have data pipelines, based on Azure ADF, Apache NiFi, Apache Airflow or Amazon EMR etc., to transform, process and enhance the data, generating real-time analytics on the fly. Recent formats like T20, T10 require more urgency in strategic thinking based on scoring within limited overs, or containing batsmen from going on a rampage within the set of overs, the analytics on a fly may help the coach to modify the batting or bowling lineup at points in match. In this context see my earlier post Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket

All of these are not just possible, but are likely to become reality as more and more formats, leagues and cricket data proliferate around the world.

This post, focuses on generating near-real time analytics for ICC Men’s T20 World Cup using GooglyPlusPlus. Included below, is a sampling of the analytics that you can perform for analysing the matches. In addition you can do all the analysis included in my post GooglyPlusPlus gets ready for ICC Men’s T20 World Cup

  1. Namibia-Sri Lanka-16 Oct 2022 : Match Worm graph

The opening match between Namibia vs Sri Lanka resulted in an upset. We can see this in the match worm-wicket graph below

2. Scotland vs West Indies – 17 Oct 2022: Batsmen vs Bowlers

George Munsey was the top scorer for Scotland and was instrumental in the win against WI. His performance against West Indies bowlers is shown below. Note, the charts are interactive

3. Zimbabwe vs Ireland – 17 Oct 2022 : Team Runs vs SR

Sikander Raza of Zimbabwe with 82 runs with the strike rate ~ 170

4. United Arab Emirates vs Netherlands – 16 Oct 2022: Team runs across 20 overs

UAE pipped Netherlands in the middle overs and were able to win by 1 ball and 3 wickets

5. Scotland vs Ireland – 19 Oct 2022 : Team Runs vs SR Middle overs plot

Curtis Campher snatched the game away from Scotland with his stellar performance in middle and death overs

6. UAE vs Namibia : 20 Oct 2022 : Team Wickets vs ER plot

Basoor Hameed and Zahoor Khan got 2 wickets apiece with an economy rate of ~5.00 but still they were not able to stop UAE from stealing a win

7. Overall Runs vs SR in T20 World Cup 2022

It is too early to rank the players, nevertheless in the current T20 World Cup, MP O’Dowd (Netherlands), BKG Mendis (Sri Lanka) and JN Frylinck(Namibia) are the top 3 batsmen with good runs and Strike Rate

8. Overall Wickets over ER in T20 World Cup 2022

The top 3 bowlers so far in T20 World Cup 2022 are a) BFW de Leede (Netherlands) b) PWH De Silva (Sri Lanka) c) KP Meiyappan (UAE) with a total of 7,7, and 6 wickets respectively

Note: Besides the match analysis GooglyPlusPlus also provides detailed analysis of batsmen, bowlers, matches as above, team-vs-team, team-vs-all teams, ranking of batsmen & bowlers etc. For more details see my post GooglyPlusPlus gets ready for ICC Men’s T20 World Cup

Do visit GooglyPlusPlus everyday to check out the cricketing actions of matches gone by. You can also follow me on twitter @tvganesh_85 for daily highlights.

You may also like

  1. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  2. De-blurring revisited with Wiener filter using OpenCV
  3. Using Reinforcement Learning to solve Gridworld
  4. Deep Learning from first principles in Python, R and Octave – Part 3
  5. Getting started with Tensorflow, Keras in Python and R
  6. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  7. Practical Machine Learning with R and Python – Part 5
  8. Cricpy takes a swing at the ODIs
  9. Video presentation on Machine Learning, Data Science, NLP and Big Data – Part 1

To see all posts click Index of posts

GooglyPlusPlus gets ready for ICC Men’s T20 World Cup

It is time!! So last weekend, I turned the wheels, moved the levers and listened to the hiss of steam, as I cranked up my Shiny app GooglyPlusPlus. The ICC Men’s T20 World Cup is just around the corner, and it was time to prepare for this event. This latest GooglyPlusPlus is current with the latest Intl. men’s T20 match data, give or take a few. GooglyPlusPlus can analyze batsmen, bowlers, matches, team-vs-team, team-vs-all teams, besides also ranking batsmen, bowlers and plot performances in Powerplay, middle and death overs.

In this post, I include a quick refresher of some of features of my app GooglyPlusPlus. Note: This is a random sampling of the functions available. There are more than 120+ features available in the app.

Check out your favourite players and your country’s team with GooglyPlusPlus

Note 1: All charts are interactive

Note 2: You can choose a date range for your analysis

Note 3: The data for this app is taken from Cricsheet

  1. T20 Batsman tab

This tab includes functions pertaining to individual batsmen. Functions include Runs vs Deliveries, moving average runs, cumulative average run, cumulative average strike rate, runs against opposition, runs at venue etc.

For e.g.

a) Suryakumar Yadav’s (India) cumulative strike rate

b) Mohammed Rizwan’s (Pakistan) performance against opposition

2. T20 Bowler’s Tab

The bowlers tab has functions for computing mean economy rate, moving average wickets, cumulative average wicks, cumulative economy rate, bowlers performance against opposition, bowlers performance in venue, predict wickets and others

A random function is shown below

a) Predict wickets for Wanindu Hasaranga of Sri Lanka

3. T20 Match tab

The match tab has functions that can compute match batting & bowling scorecard, batting partnerships, batsmen performance vs bowlers, bowler’s wicket kind, bowler’s wicket match, match worm graph, match worm wicket graph, team runs across 20 overs, team wickets in 20 overs, teams runs or wickets in powerplay, middle and death overs

Here are a couple of functions from this tab

a) Afghanistan vs Ireland – 2022-08-15

b) Australia vs Sri Lanka – 2019-11-01 – Runs across 20 overs

4. T20 Head-to-head tab

This tab provides the analysis of all combination of T20 teams (countries) in different aspects. This tab can compute the overall batting, bowling scorecard in all matches between 2 countries, batsmen partnerships, performances against bowlers, bowlers vs batsmen, runs, strike rate, wickets, economy rate across 20 overs, runs vs SR plot and wicket vs ER plot in all matches between team and so on. Here are a couple of examples from this tab

a) Bangladesh vs West Indies – Batting scorecard from 2019-01-01 to 2022-07-07

b) Wickets vs ER plot – England vs New Zealand – 2019-01-01 to 2021-11-10

5. T20 Team performance overall tab

This tab provides detailed analysis of the team’s performance against all other teams. As in the previous tab there are functions to compute the overall batting, bowling scorecard of a team against all other teams for any specific interval of time. This can help in picking out the most consistent batsmen, bowlers. Besides there are functions to compute overall batting partnerships, bowler vs batsmen, runs, wickets across 20 overs, run vs SR and wickets vs ER etc.

a) Batsmen vs Bowlers (Rank 1- V Kohli 2019-01-01 to 2022-09-25)

b) team Runs vs SR in Death overs (India) (2019-01-01 to 2022-09-25)

6) Optimisation tab

In the optimisation tab we can check the performance of a specific batsmen against specific bowlers or bowlers against batsmen

a) Batsmen vs Bowlers

b) Bowlers vs batsmen

7) T20 Batting Performance tab

This tab performs various analytics like ranking batsmen based on Run over SR and SR over Runs. Also you can plot overall Runs vs SR, and more specifically Runs vs SR in Powerplay, Middle and Death overs. All of this can be done for a specific date range. Here are some examples. The data includes all of T20 (all countries all matches)

a) Rank batsmen (Runs over SR, minimum matches played=33, date range=2019-01-01 to 2022-09-27)

The top 3 batsmen are Mohamen Rizwan, V Kohli and Babar Azam

b) Overall runs vs SR plot (2019-01-01 to 2022-09-27)

c) Overall Runs vs SR in Powerplay (all teams- 2019-01-01-2022-09-27)

This plot will be crowded. However, we can zoom into an area of interest. The controls for interacting with the plot are in the top of the plot as shown

Zooming in and panning to the area we can see the best performers in powerplay are as below

8) T20 Bowling Performance tab

This tab computes and ranks bowlers on Wickets over Economy and Economy rate over wickets. We can also compute and plot the Wickets vs ER in all matches , besides the Wickets vs ER in powerplay, middle and death overs with data from all countries

a) Rank Bowlers (Wickets over ER, minimum matches=28, 2019-01-01 to 2022-09-27)

b) Wickets vs ER plot

S Lamichhane (NEP), Hasaranga (SL) and Shamsi (SA) are excellent bowlers with high wickets and low ER as seen in the plot below

c) Wickets vs ER in death overs (2019-01-01 to 2022-09-27, min matches=24)

Zooming in and panning we see the best performers in death overs are MR Adair (IRE), Haris Rauf(PAK) and Chris Jordan (ENG)

With the excitement building up, it is time you checked out how your country will perform and the players who will do well.

Go ahead give GooglyPlusPlus a spin !!!

Also see

  1. Deep Learning from first principles in Python, R and Octave – Part 5
  2. Big Data-5: kNiFi-ing through cricket data with yorkpy
  3. Understanding Neural Style Transfer with Tensorflow and Keras
  4. De-blurring revisited with Wiener filter using OpenCV
  5. Re-introducing cricketr! : An R package to analyze performances of cricketers
  6. Modeling a Car in Android
  7. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”
  8. Practical Machine Learning with R and Python – Part 2
  9. Cricpy adds team analytics to its arsenal!!
  10. Benford’s law meets IPL, Intl. T20 and ODI cricket

To see all posts click Index of posts

Then, Now(IPL 2022), Beyond : Insights from GooglyPlusPlus

IPL 2022 has just concluded and yet again, it is has thrown a lot of promising and potential youngsters in its wake, while established players have fallen! With IPL 2022, we realise that “Sceptre and Crown must tumble down” and that ‘the glories‘ of form and class like everything else are “shadows not substantial things” (Death the Leveller by James Shirley).

So King Kohli had to kneel, and hitman’ himself got hit. Rishabh Pant, Jadeja also had a poor season. On the contrary there were several youngsters who shone like Abhishek Sharma, Tilak Verma, Umran Malik or a Mohsin Khan

This post is about my potential T20 Indian players for the World Cup 2022 and beyond.

The post below includes my own analysis and thoughts. Feel free to try out my Shiny app GooglyPlusPlus and draw your own conclusions.

You can also view the analyais as a youtube video at Insights from GooglyPlusPlus

How often we hear that data by itself is useless, unless we can draw insights from it? This is a prevailing theme in the corporate world and everybody uses all sorts of tools to analyse and subsequently draw insights. Data analysis can be done in many ways as data can be sliced, diced, chopped in a zillion ways. There are many facets and perspectives to analysing data. Creating insights is easy, but arriving at actionable insights is anything but. So, the problem of selecting the best 11 is difficult as there are so many ways to look at the analysis. My Shiny app GooglyPlusPlus based on my R package yorkr can analyse data in several ways namely

  1. Batsman analysis
  2. Bowler analysis
  3. Match analysis
  4. Team vs team analysis
  5. Team vs all teams analysis
  6. Batsman vs bowler and vice versa
  7. Analysis of in 3,4,5 in power play, middle and death overs

GooglyPlusPlus uses my R package yorkr which has ~ 160 functions some which have several options. So, we can say roughly there are ~500 different ways that analysis can be done or in other words we can gather almost roughly 500+ different insights, not to mention that there are so many combinations of head-on matches and one-vs-all matches.

So generating insights or different ways of analysis data alone is not enough. The question is whether we can get a consolidated view from the different insights. In this post, I try to identify the best contenders for the Indian T20 team. This is far more difficult than it looks. Do you select players on past historical performance or do you choose from the newer crop of players, who have excelled in the recent IPL season. I think this boils down the typical situation in any domain. In engineering, we have tradeoffs – processing power vs memory tradeoff, throughput vs latency tradeoff or in the financial domain it is cost vs benefit or risk vs reward tradeoff. For team selection, the quandary is, whether to choose seasoned players with good historical performance but a poor performances in recent times or go with youngsters who have played with great courage and flair in this latest episode of IPL 2022. Hence there is a tradeoff between reliable but below average performance or risky but superlative performances of new players.

For this I base my potential list from

  • Then (past history of batsmen & bowlers) – I have chosen the performance of batsmen and bowlers in the last 3 years. With we can arrive at those who have had reasonably reliable performance for the last 3 years
  • Now (IPL 2022) – Performance in the current season IPL 2022

A. Then (Jan 2020 – May 2022) – Batsmen analysis

In this section I analyse the performances of batsmen and bowlers from Jan 2022 – May 2022. This is done based on ranking, and plots of Runs vs Strike Rate in Power Play, Middle and Death overs

Also I analyse bowlers based on the overall rank from Jan 2022- May 2022. Further more analysis is done on Wickets vs Economy Rate overall and in Power Play, Middle and Death overs

a. Ranks of batsmen (Runs over Strike Rate) : Jan 2020 – May 2022

The top batsmen consistency wise

[KL Rahul, Shikhar Dhawan, Ruturaj Gaikwad, Ishan Kishan, Shubman Gill, Suryakumar Yadav, Sanju Samson, Mayank Agarwal, Prithvi Shaw, Devdutt Padikkal, Nitish Rana, Virat Kohli, Shreyas Iyer, Ambati Rayadu, Rahul Tripathi, Rishabh Pant, Rohit Sharma, Hardik Pandya]

b. Ranks of batsmen (Strike Rate over Runs) : Jan 2020 – May 2022

The most consistent players from the Strike Rate perspective are

The batsmen with best Strike Rate in the last 3 years are

[Dinesh Karthik, Prithvi Shaw, Hardik Pandya, Rishabh Pant, Sanju Samson, Rahul Tripathi, Suryakumar Yadav, Nitish Rana, Mayank Agarwal, Krunal Pandya, MS Dhoni, Shikhar Dhawan, Ishan Kishan, KL Rahul]

c.Best Batsmen Runs vs SR : Jan 2020 – May 2022

The best batsmen should have a reasonable combination of Runs and SR. The best batsmen are

[KL Rahul, Shikhar Dhawan, Ruturaj Gaikwad, Ishan Kishan, Shubman Gill , Sanju Samson, Suryakumar Yadav, Shubman Gill, Mayank Agarwal, Prithvi Shaw, Nitish Rana, Hardik Pandya, Rishabh Pant, Rahul Tripathi,

d. Best batsmen Runs vs SR in Powerplay: Jan 2020 – May 2022

The best players in Power play

The best players in Power play in the last 3 years are

[KL Rahul, Prithvi Shaw, Rohit Sharma, Devdutt Padikkal, Mayank Agarwal, Virat Kohli, Ishan Kishan, Yashashvi Jaiswal, Wriddhiman Saha, Rahul Tripathi, Sanju Samson, Robin Uthappa, Venkatesh Iyer, Nitish Rana,Suryakumar Yadav, Abhishek Sharma Shreyas Iyer ]

e. Best batsmen Runs vs SR in Middleovers: Jan 2020 – May 2022

The most consistent players in the last 3 years in the middle overs are

[KL Rahul, Sanju Samson, Shikhar Dhawan, Rishabh Pant, Nitish Rana, Shreyas Iyer, Shubman Gill, Ishan Kishan, Devdutt Padikkal, Rahul Tripathi, Ruturaj Gaikwad, Shivam Dube, Hardik Pandya]

f. Best batsmen Runs vs SR in Death overs: Jan 2020 – May 2022

The best batsmen in death overs are

[Dinesh Karthik, Ravindra Jadeja, Hardik Pandya, Rahul Tewatia, MS Dhoni, KL Rahul, Rishabh Pant, Suryakumar Yadav, Ambati Rayadu, Virat Kohli, Nitish Rana, Shikhar Dhawan, Ruturaj Gaikwad, Ishan Kishan]

B) Now (IPL 2022) – Batsmen analysis

IPL 2022 just finished and clearly brings out the batsmen who are in great nick. It is always going to be a judgment call of whether to go for ‘old reliable’ or ‘new and awesome’.

a. Ranks of batsmen (Runs over Strike Rate) : IPL 2022

The best batsmen this season in Runs over Strike rate are

The best batsmen are

[KL Rahul, Shikhar Dhawan, Hardik Pandya, Deepak Hooda, Shubman Gill, Rahul Tripathi, Abhishek Sharma, Ishan Kishan, Wriddhiman Saha, Shreyas Iyer, Tilak Verma, Ruturaj Gaikwad, Sanju Samson, Shivam Dube]

b. Ranks of batsmen (Strike Rate over Runs) : IPL 2022

The batsmen with the best strike rate are

[Dinesh Karthik, Rishabh Pant, Rahul Tewathia, Rahul Tripathi, Sanju Samson, R Ashwin, Deepak Hooda, MS Dhoni, Nitish Rana, Riyan Parag, Shreya Iyer]

c.Best Batsmen Runs vs SR :IPL 2022

From an overall performance the following batsmen shone this season

[KL Rahul, Shikhar Dhawan, Shubman Gill, Hardik Pandya, Abhishel Sharma, Deepak Hooda, Rahul Tripathi, Tilak Verma, Shreya Iyer, Nitish Rana, Sanju Samson, Rishabh Pant]

d. Best batsmen Runs vs SR in Powerplay: IPL 2022

Top batsmen in Power play in IPL 2022

[Abhishek Sharma, Shikhar Dhawan, Rohit Sharma, Ishan Kishan, Shubman Gill, Prithvi Shaw, Wriddhiman Saha, Ishan Kishan, KL Rahul, Ruturaj Gaikwad, Virat Kohli, Yashasvi Jaiswal, Mayank Agarwal, Robin Uthappa, Sanju Samson, Nitish Rana]

e. Best batsmen Runs vs SR in Middleovers: IPL 2022

Best batsmen in middle overs in IPL 2022

[Deepak Hooda, Hardik Pandya, Tilak Verma, KL Rahul, Sanju Samson, Rishabh Pant, Shubman Gill, Ambati Rayudu, Suryaprakash Yadav, Shikhar Dhawan, Ruturaj Gaikwad]

f. Best batsmen Runs vs SR in Death overs: IPL 2022

Top batsmen in death overs in IPL 2022

[Dinesh Karthik, Rahul Tewatia, MS Dhoni, KL Rahul, Azar Patel, Washington Sundar, R Ashwin, Hardik Pandya, Ayush Badoni, Shivam Dube, Suryakumar Yadav, Ravindra Jadeja, Sanju Samson]

Overall Batting Performance in season

Kohli peaked in 2016 and from then on it has been a downward slide (see below)

Taking a look at Kohli’s moving average it is clear that he is past his prime and it will take a herculean effort to regain his lost glory

Similarly, Rohit Sharma’s moving average is constantly around ~30 as seen below

The cumulative average of Rohit Sharma is shown below

Comparing KL Rahul, Shikhar Dhawan, Rohit Sharma and V Kohli we see that KL Rahul and Shikhar Dhawan have had a much superior performance in the last 2-3 years. Rohit has averaged about ~25 runs every season.

Comparing the 4 wicket-keeper batsmen Sanju Samson, Rishabh Pant, Ishan Kishan and Dinesh Karthik from 2016

i) Runs over Strike Rate

We see that Pant peaked in 2018 but has not performed as well since. In the last 2 years Sanju Samson and Ishan Kishan have done well

ii) Strike Rate over Runs

For the last couple of seasons Rishabh Pant and Dinesh Kartik top the strike rate over the other 2

Similar analysis can be done other combinations of batsmen

Choosing the best batsmen from the above, my top 5 batsmen would be

  1. KL Rahul
  2. Shikhar Dhawan
  3. Prithvi Shaw, Ruturaj Gaikwad, Ishan Kishan
  4. Sanju Samson, Shreyas Iyer, Shubman Gill, Shivam Dube,
  5. Abhishek Sharma, Tilak Verma, Rahul Tripathi, Suryakumar Yadav, Deepak Hooda
  6. Rishabh Pant, Dinesh Karthik

Personally, I feel Ishan Kishan and Shreyas Iyer are a little tardy while playing express speeds, as compared to Sanju Samson or Rishabh Pant.

If you notice, I have not included both Virat Kohli or Rohit Sharma who have been below par for some time

C. Then (Jan 2020 – May 2022) – Bowler analysis

This section I analyse the performances of bowlers from Jan 2022 – May 2022. This is done based on ranking, and plots of Wickets vs Economy Rate in Power Play, Middle and Death overs

a. Ranks of bowlers (Wickets over Economy Rate) : Jan 2020 – May 2022

The most consistent bowlers Wickets over Economy Rate for the last 3 years are

[YS Chahal, Jasprit Bumrah, Mohammed Dhami, Harshal Patel, Shardul Thakur, Arshdeep Singh, Rahul Chahar, Varun Chakravarthy, Ravi Bishnoi, Prasidh Krishna, R Ashwon, Axar Patel, Mohammed Siraj, Ravindra Jadeja, Krunal Pandya, Rahul Tewatia]

b. Ranks of bowlers (Economy Rate over Wickets) : Jan 2020 – May 2022

The most economical bowlers since 2020 are

[Axar Patel, Krunal Pandya, Jasprit Bumrah, CV Varun, R Ashwin, Ravi Bishnoi, Rahul Chahar, YS Chahal, Ravindra Jadeja, Harshal Patel, Mohammed Shami, Mohammed Siraj, Rahul Tewatia, Arshdeep Singh, Prasidh Krishna, Shardul Thakur]

c.Best Bowlers Wickets vs ER : Jan 2020 – May 2022

The best bowlers Wickets vs ER will be in the bottom right quadrant. The most consistent and reliable bowlers are

[YS Chahal, Jasprit Bumrah, Mohammed Shami, Harshal Patel, CV Arun, Ravi Bishnoi, Rahul Chahar, R Ashwin, Axar Patel]

d. Best bowlers Wickets vs ER in Powerplay: Jan 2020 – May 2022

The best bowlers in Powerplay are

[Mohammed Shami, Deepak Chahar, Mohammed Siraj, Arshdeep Singh, Jasprit Bumrah, Avesh Khan, Mukesh Choudhary, Shardul Thakur, T Natarajan, Bhuvaneshwar Kumar, WashingtonSundar, Shivam Mavi]

e. Best bowlers Wickets vs ER in Middle overs : Jan 2020 – May 2022

The most reliable performers in middle overs from 2020-2022 are

[YS Chahal, Rahul Chahr, Ravi Bishnoi, Harshal Patel, Axar Patel, Jasprit Bumrah, Umran Malik, R Ashwin, Avesh Khan, Shardul Thakur, Kuldeep Yadav]

f. Best bowlers Wickets vs ER in Death overs : Jan 2020 – May 2022

The most reliable bowlers are

[Harshal Patel, Mohammed Shami, Jasprit Bumrah, Arshdeep Singh, T Natarajan, Avesh Khan, Shardul Thakur, Bhuvaneshwar Kumar, Shivam Mavi, YS Chahal, Prasidh Krishna, Mohammed Siraj, Chetan Sakariya]

B) Now (IPL 2022) – Bowler analysis

a. Ranks of bowlers (Wickets over Economy Rate) : IPL 2022

The best bowlers in IPL 2022 when considering Wickets over Economy Rate

[YS Chahal, Umran Malik, Prasidh Krishna, Mohammed Shami, Kuldeep Yadav, Harshal Patel, T Natarajan, Avesh Khan, Shardul Thakur, Mukesh Choudhary, Jasprit Bumrah, Ravi Bishnoi]

a. Ranks of bowlers (Economy Rate over Wickets) : IPL 2022

The most economical bowlers in IPL 2022 are

[Axar Patel, Jasprit Bumrah, Krunal Pandya, Umesh Yadav, Bhuvaneshwar Kumar, Rahul Chahr, Harshal Patel, Arshdeep Singh, R Ashwion, Umran Malik, Kuldeep Yadav, YS Chahal, Mohammed Shami, Avesh Khan, Prasidh Krishna]

c.Best Bowlers Wickets vs ER : IPL 2022

The overall best bowlers in IPL 2022 are

[YS Chahal, Umran Malik, Harshal Patel, Prasidh Krishna, Mohammed Shami, Kuldeep Yadav, Avesh Khan, Jasprit Bumrah, Umesh Yadav, Bhuvaneshwar Kumar, Arshdeep Singh, R Ashwin, Rahul Chahar, Krunal Pandya]

d. Best bowlers Wickets vs ER in Powerplay: IPL 2022

The best bowlers in IPL 2022 in Power play are

[Mukesh Choudhary, Mohammed Shami, Prasidh Krishna, Umesh Yadav, Avesh Khan, Mohsin Khan, T Natarajan, Jasprit Bumrah, Yash Dayal, Mohammed Siraj]

d. Best bowlers Wickets vs ER in Middle overs: IPL 2022

The best bowlers in IPL 2022 during middle overs

The best bowlers are

[YS Chahal, Umran Malik, Kuldeep Yadav, Harshal Patel, Ravi Bishnoi, R Ashwin]

e. Best bowlers Wickets vs ER in Death overs: IPL 2022

The best bowlers in death overs in IPL 2022 are

[T Natarajan, Harshal Patel, Bhuvaneshwar Kumar, Mohammed Shami, Jasprit Bumrah, Shardul Thakur, YS Chahal, Prasidh Krishna, Avesh Khan, Mohsin Khan, Yash Dayal, Umran Malik, Arshdeep Singh]

Typically in a team we would need a combination of 4 bowlers (2 fast & 2 spinner or 3 fast and 1 spinner) with an additional player who is all rounder.

For 4 bowlers we could have

  1. JJ Bumrah
  2. Mohammed Shami, Umran Malik, Bhuvaneshwar Kumar, Umesh Yadav
  3. Arshdeep Singh, Avesh Khan, Mohsin Khan, Harshal Patel
  4. YS Chahal, Ravi Bishnoi, Rahul Chahar, Axar Patel
  5. Ravindra Jadeja, Hardik Pandya, Rahul Tewathia, R Ashwin

i) Performance comparison (Wickets over Economy Rate)

Bumrah had the best season in 2020. He has been doing quite well and has been among the wickets

ii) Performance comparison (Economy Rate over Wickets)

Bumrah has the best Economy Rate

We can do a wicket prediction of bowlers. So for example for Bumrah it is

iii) Performance evaluation (Wickets over Economy Rate)

Harshal Patel followed by Avesh Khan had a good season last year, but Umran Malik pipped them this year (see below)

iv) Performance analysis of spinners

a. Wickets over Economy Rate: 2022

Chahal has the best season followed by Bishnoi and Chahar this season

b) Economy Rate over WIckets

Axar Patel has the best economy rate followed by Rahul Chahar

Conclusion

The above post identified the best candidates for the Indian team in the future and beyond. In my T20 list, I have neither included Virat Kohli or Rohit Sharma. The data in T20 clearly indicates that they have had their days. There is a lot more talent around. The tradeoff is a little risk for a greater potential performance. My list would be

  1. KL Rahul
  2. Shikhar Dhawan
  3. Ruturaj Gaikwad, Prithvi Shaw, Rahul Tripathi
  4. Suryakumar Yadav, Shreyas Iyer, Abhishek Sharma, Deepak Hooda
  5. Sanju Samson (Wicket keeper/captain)/ Rishabh Pant/Dinesh Karthik
  6. Hardik Pandya, Ravindra Jadeja, Rahul Tewathia
  7. Jasprit Bumrah
  8. Mohammed Shami, Bhuvaneshwar Kumar, Umran Malik
  9. Arshdeep Singh, Avesh Khan, Harshal Patel
  10. YS Chahal
  11. Axar Patel, Ravi Bishnoi, Rahul Chahar

You may agree/ disagree with my list. Feel free to do your analysis with GooglyPlusPlus and come to your own conclusions

This analysis is also available on youtube Insights from GooglyPlusPlus

You may also like

  1. Deep Learning from first principles in Python, R and Octave – Part 1
  2. Player Performance Estimation using AI Collaborative Filtering
  3. The mechanics of Convolutional Neural Networks in Tensorflow and Keras
  4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
  5. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  6. Programming languages in layman’s language
  7. Practical Machine Learning with R and Python – Part 4
  8. Pitching yorkpy…swinging away from the leg stump to IPL – Part 3
  9. Revisiting World Bank data analysis with WDI and gVisMotionChart
  10. Natural language processing: What would Shakespeare say?

To see all posts click Index of posts


Player Performance Estimation using AI Collaborative Filtering

1. Introduction

Often times before crucial matches, or in general, we would like to know the performance of a batsman against a bowler or vice-versa, but we may not have the data. We generally have data where different batsmen would have faced different sets of bowlers with certain performance data like ballsFaced, totalRuns, fours, sixes, strike rate and timesOut. Similarly different bowlers would have performance figures(deliveries, runsConceded, economyRate and wicketTaken) against different sets of batsmen. We will never have the data for all batsmen against all bowlers. However, it would be good estimate the performance of batsmen against a bowler, even though we do not have the performance data. This could be done using collaborative filtering which identifies and computes based on the similarity between batsmen vs bowlers & bowlers vs batsmen.

This post shows an approach whereby we can estimate a batsman’s performance against bowlers even though the batsman may not have faced those bowlers, based on his/her performance against other bowlers. It also estimates the performance of bowlers against batsmen using the same approach. This is based on the recommender algorithm which is used to recommend products to customers based on their rating on other products.

This idea came to me while generating the performance of batsmen vs bowlers & vice-versa for 2 IPL teams in this IPL 2022 with my Shiny app GooglyPlusPlus in the optimization tab, I found that there were some batsmen for which there was no data against certain bowlers, probably because they are playing for the first time in their team or because they were new (see picture below)

In the picture above there is no data for Dewald Brevis against Jasprit Bumrah and YS Chahal. Wouldn’t be great to estimate the performance of Brevis against Bumrah or vice-versa? Can we estimate this performance?

While pondering on this problem, I realized that this problem formulation is similar to the problem formulation for the famous Netflix movie recommendation problem, in which user’s ratings for certain movies are known and based on these ratings, the recommender engine can generate ratings for movies not yet seen.

This post estimates a player’s (batsman/bowler) using the recommender engine This post is based on R package recommenderlab

“Michael Hahsler (2021). recommenderlab: Lab for Developing and Testing Recommender Algorithms. R package version 0.2-7. https://github.com/mhahsler/recommenderlab

Note 1: Thw data for this analysis is taken from Cricsheet after being processed by my R package yorkr.

You can also read this post in RPubs at Player Performance Estimation using AI Collaborative Filtering

A PDF copy of this post is available at Player Performance Estimation using AI Collaborative Filtering.pdf

You can download this R Markdown file and the associated data and perform the analysis yourself using any other recommender engine from Github at playerPerformanceEstimation

Problem statement

In the table below we see a set of bowlers vs a set of batsmen and the number of times the bowlers got these batsmen out.
By knowing the performance of the bowlers against some of the batsmen we can use collaborative filter to determine the missing values. This is done using the recommender engine.

The Recommender Engine works as follows. Let us say that there are feature vectors x^1, x^2 and x^3 for the 3 bowlers which identify the characteristics of these bowlers (“fast”, “lateral drift through the air”, “movement off the pitch”). Let each batsman be identified by parameter vectors \theta^1, \theta^2 and so on

For e.g. consider the following table

Then by assuming an initial estimate for the parameter vector \theta and the feature vector xx we can formulate this as an optimization problem which tries to minimize the error for \theta^T*x This can work very well as the algorithm can determine features which cannot be captured. So for e.g. some particular bowler may have very impressive figures. This could be due to some aspect of the bowling which cannot be captured by the data for e.g. let’s say the bowler uses the ‘scrambled seam’ when he is most effective, with a slightly different arc to the flight. Though the algorithm cannot identify the feature as we know it, but the ML algorithm should pick up intricacies which cannot be captured in data.

Hence the algorithm can be quite effective.

Note: The recommender lab performance is not very good and the Mean Square Error is quite high. Also, the ROC and AUC curves show that not in aLL cases the algorithm is doing a clean job of separating the True positives (TPR) from the False Positives (FPR)

Note: This is similar to the recommendation problem

The collaborative optimization object can be considered as a minimization of both \theta and the features x and can be written as

J(x^{(1)},x^{(2)},..x^{(n_{u})}, \theta^{(1)},\theta^{(2)},..,\theta^{(n_{m})}}= 1/2\sum(\theta^{j})^{T}x^{i}- y^{(i,j)})^{2} + \lambda\sum\sum (x_{k}^{i})^{2} + \lambda\sum\sum (_\theta{k}^{j})^{2}

The collaborative filtering algorithm can be summarized as follows

  1. Initialize \theta^1, \theta^2\theta^{n_{u}} and the set of features be x^1,x^2, … ,x^{n_{m}} to small random values
  2. Minimize J(\theta^1, \theta^2\theta^{n_{u}},x^1, x^2, … ,x^{n_{m}}) using gradient descent. For every
    j=1,2, …n_{u}, i= 1,2,.., n_{m}
  3. x_{k}^{i} := x_{k}^{i}\alpha ( \sigma (\theta^j)^T)x^iy^(i,j)\theta_{k}^{j} + \lambda x_{k}^i

    &

    \theta_{k}^{i} := \theta_{k}^{i}\alpha ( \sigma (\theta^j)^T)x^i - y^(i,j)\theta_{k}^{j} + \lambda x_{k}^i
  4. Hence for a batsman with parameters \theta and a bowler with (learned) features x, predict the “times out” for the player where the value is not known using \theta^Tx

The above derivation for the recommender problem is taken from Machine Learning by Prof Andrew Ng at Coursera from the lecture Collaborative filtering

There are 2 main types of Collaborative Filtering(CF) approaches

  1. User based Collaborative Filtering User-based CF is a memory-based algorithm which tries to mimics word-of-mouth by analyzing rating data from many individuals. The assumption is that users with similar preferences will rate items similarly.
  2. Item based Collaborative Filtering Item-based CF is a model-based approach which produces recommendations based on the relationship between items inferred from the rating matrix. The assumption behind this approach is that users will prefer items that are similar to other items they like.

1a. A note on ROC and Precision-Recall curves

A small note on interpreting ROC & Precision-Recall curves in the post below

ROC Curve: The ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR). Ideally the TPR should increase faster than the FPR and the AUC (area under the curve) should be close to 1

Precision-Recall: The precision-recall curve shows the tradeoff between precision and recall for different threshold. A high area under the curve represents both high recall and high precision, where high precision relates to a low false positive rate, and high recall relates to a low false negative rate

library(reshape2)
library(dplyr)
library(ggplot2)
library(recommenderlab)
library(tidyr)
load("recom_data/batsmenVsBowler20_22.rdata")

2. Define recommender lab helper functions

Helper functions for the RMarkdown notebook are created

  • eval – Gives details of RMSE, MSE and MAE of ML algorithm
  • evalRecomMethods – Evaluates different recommender methods and plot the ROC and Precision-Recall curves
# This function returns the error for the chosen algorithm and also predicts the estimates
# for the given data
eval <- function(data, train1, k1,given1,goodRating1,recomType1="UBCF"){
  set.seed(2022)
  e<- evaluationScheme(data,
                       method = "split",
                       train = train1,
                       k = k1,
                       given = given1,
                       goodRating = goodRating1)
  
  r1 <- Recommender(getData(e, "train"), recomType1)
  print(r1)
  
  p1 <- predict(r1, getData(e, "known"), type="ratings")
  print(p1)
  
  error = calcPredictionAccuracy(p1, getData(e, "unknown"))
  
  print(error)
  p2 <- predict(r1, data, type="ratingMatrix")
  p2
}
# This function will evaluate the different recommender algorithms and plot the AUC and ROC curves
evalRecomMethods <- function(data,k1,given1,goodRating1){
  set.seed(2022)
  e<- evaluationScheme(data,
                       method = "cross",
                       k = k1,
                       given = given1,
                       goodRating = goodRating1)
  
  models_to_evaluate <- list(
    `IBCF Cosinus` = list(name = "IBCF", 
                          param = list(method = "cosine")),
    `IBCF Pearson` = list(name = "IBCF", 
                          param = list(method = "pearson")),
    `UBCF Cosinus` = list(name = "UBCF",
                          param = list(method = "cosine")),
    `UBCF Pearson` = list(name = "UBCF",
                          param = list(method = "pearson")),
    `Zufälliger Vorschlag` = list(name = "RANDOM", param=NULL)
  )
  
  n_recommendations <- c(1, 5, seq(10, 100, 10))
  list_results <- evaluate(x = e, 
                           method = models_to_evaluate, 
                           n = n_recommendations)
  plot(list_results, annotate=c(1,3), legend="bottomright")
  plot(list_results, "prec/rec", annotate=3, legend="topleft")
}

3. Batsman performance estimation

The section below regenerates the performance for batsmen based on incomplete data for the different fields in the data frame namely balls faced, fours, sixes, strike rate, times out. The recommender lab allows one to test several different algorithms all at once namely

  1. User based – Cosine similarity method, Pearson similarity
  2. Item based – Cosine similarity method, Pearson similarity
  3. Popular
  4. Random
  5. SVD and a few others

3a. Batting dataframe

head(df)
##   batsman1         bowler1 ballsFaced totalRuns fours sixes  SR timesOut
## 1 A Badoni        A Mishra          0         0     0     0 NaN        0
## 2 A Badoni        A Nortje          0         0     0     0 NaN        0
## 3 A Badoni         A Zampa          0         0     0     0 NaN        0
## 4 A Badoni     Abdul Samad          0         0     0     0 NaN        0
## 5 A Badoni Abhishek Sharma          0         0     0     0 NaN        0
## 6 A Badoni      AD Russell          0         0     0     0 NaN        0

3b Data set and data preparation

For this analysis the data from Cricsheet has been processed using my R package yorkr to obtain the following 2 data sets – batsmenVsBowler – This dataset will contain the performance of the batsmen against the bowler and will capture a) ballsFaced b) totalRuns c) Fours d) Sixes e) SR f) timesOut – bowlerVsBatsmen – This data set will contain the performance of the bowler against the difference batsmen and will include a) deliveries b) runsConceded c) EconomyRate d) wicketsTaken

Obviously many rows/columns will be empty

This is a large data set and hence I have filtered for the period > Jan 2020 and < Dec 2022 which gives 2 datasets a) batsmanVsBowler20_22.rdata b) bowlerVsBatsman20_22.rdata

I also have 2 other datasets of all batsmen and bowlers in these 2 dataset in the files c) all-batsmen20_22.rds d) all-bowlers20_22.rds

You can download the data and this RMarkdown notebook from Github at PlayerPerformanceEstimation

Feel free to download and analyze the data and use any recommendation engine you choose

3c. Exploratory analysis

Initially an exploratory analysis is done on the data

df3 <- select(df, batsman1,bowler1,timesOut)
df6 <- xtabs(timesOut ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
print(df8[1:10,1:10])
##                 A Mishra A Nortje A Zampa Abdul Samad Abhishek Sharma
## A Badoni              NA       NA      NA          NA              NA
## A Manohar             NA       NA      NA          NA              NA
## A Nortje              NA       NA      NA          NA              NA
## AB de Villiers        NA        4       3          NA              NA
## Abdul Samad           NA       NA      NA          NA              NA
## Abhishek Sharma       NA       NA      NA          NA              NA
## AD Russell             1       NA      NA          NA              NA
## AF Milne              NA       NA      NA          NA              NA
## AJ Finch              NA       NA      NA          NA               3
## AJ Tye                NA       NA      NA          NA              NA
##                 AD Russell AF Milne AJ Tye AK Markram Akash Deep
## A Badoni                NA       NA     NA         NA         NA
## A Manohar               NA       NA     NA         NA         NA
## A Nortje                NA       NA     NA         NA         NA
## AB de Villiers           3       NA      3         NA         NA
## Abdul Samad             NA       NA     NA         NA         NA
## Abhishek Sharma         NA       NA     NA         NA         NA
## AD Russell              NA       NA      6         NA         NA
## AF Milne                NA       NA     NA         NA         NA
## AJ Finch                NA       NA     NA         NA         NA
## AJ Tye                  NA       NA     NA         NA         NA

The dots below represent data for which there is no performance data. These cells need to be estimated by the algorithm

set.seed(2022)
r <- as(df8,"realRatingMatrix")
getRatingMatrix(r)[1:15,1:15]
## 15 x 15 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                               
## A Badoni         . . . . . . . . . . . . . . .
## A Manohar        . . . . . . . . . . . . . . .
## A Nortje         . . . . . . . . . . . . . . .
## AB de Villiers   . 4 3 . . 3 . 3 . . . 4 3 . .
## Abdul Samad      . . . . . . . . . . . . . . .
## Abhishek Sharma  . . . . . . . . . . . 1 . . .
## AD Russell       1 . . . . . . 6 . . . 3 3 3 .
## AF Milne         . . . . . . . . . . . . . . .
## AJ Finch         . . . . 3 . . . . . . 1 . . .
## AJ Tye           . . . . . . . . . . . 1 . . .
## AK Markram       . . . 3 . . . . . . . . . . .
## AM Rahane        9 . . . . 3 . 3 . . . 3 3 . .
## Anmolpreet Singh . . . . . . . . . . . . . . .
## Anuj Rawat       . . . . . . . . . . . . . . .
## AR Patel         . . . . . . . 1 . . . . . . .
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:15,1:15]
## 15 x 15 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                              
## AB de Villiers  . 4 3 . . 3 . 3 . . . 4 3 . .
## Abdul Samad     . . . . . . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . . . 1 . . .
## AD Russell      1 . . . . . . 6 . . . 3 3 3 .
## AJ Finch        . . . . 3 . . . . . . 1 . . .
## AM Rahane       9 . . . . 3 . 3 . . . 3 3 . .
## AR Patel        . . . . . . . 1 . . . . . . .
## AT Rayudu       2 . . . . . 1 . . . . 3 . . .
## B Kumar         3 . 3 . . . . . . . . . . 3 .
## BA Stokes       . . . . . . 3 4 . . . 3 . . .
## CA Lynn         . . . . . . . 9 . . . 3 . . .
## CH Gayle        . . . . . 6 . 3 . . . 6 . . .
## CH Morris       . 3 . . . . . . . . . 3 . . .
## D Padikkal      . 4 . . . 3 . . . . . . 3 . .
## DA Miller       . . . . . 3 . . . . . 3 . . .
# Get the summary of the data
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   3.000   3.463   4.000  21.000
# Normalize the data
r0_m <- normalize(r0)
getRatingMatrix(r0_m)[1:15,1:15]
## 15 x 15 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                                                       
## AB de Villiers   .         -0.7857143 -1.7857143 .  .       -1.7857143
## Abdul Samad      .          .          .         .  .        .        
## Abhishek Sharma  .          .          .         .  .        .        
## AD Russell      -2.6562500  .          .         .  .        .        
## AJ Finch         .          .          .         . -0.03125  .        
## AM Rahane        4.6041667  .          .         .  .       -1.3958333
## AR Patel         .          .          .         .  .        .        
## AT Rayudu       -2.1363636  .          .         .  .        .        
## B Kumar          0.3636364  .          0.3636364 .  .        .        
## BA Stokes        .          .          .         .  .        .        
## CA Lynn          .          .          .         .  .        .        
## CH Gayle         .          .          .         .  .        1.5476190
## CH Morris        .          0.3500000  .         .  .        .        
## D Padikkal       .          0.6250000  .         .  .       -0.3750000
## DA Miller        .          .          .         .  .       -0.7037037
##                                                                              
## AB de Villiers   .         -1.7857143 . . . -0.7857143 -1.785714  .         .
## Abdul Samad      .          .         . . .  .          .         .         .
## Abhishek Sharma  .          .         . . . -1.6000000  .         .         .
## AD Russell       .          2.3437500 . . . -0.6562500 -0.656250 -0.6562500 .
## AJ Finch         .          .         . . . -2.0312500  .         .         .
## AM Rahane        .         -1.3958333 . . . -1.3958333 -1.395833  .         .
## AR Patel         .         -2.3333333 . . .  .          .         .         .
## AT Rayudu       -3.1363636  .         . . . -1.1363636  .         .         .
## B Kumar          .          .         . . .  .          .         0.3636364 .
## BA Stokes       -0.6086957  0.3913043 . . . -0.6086957  .         .         .
## CA Lynn          .          5.3200000 . . . -0.6800000  .         .         .
## CH Gayle         .         -1.4523810 . . .  1.5476190  .         .         .
## CH Morris        .          .         . . .  0.3500000  .         .         .
## D Padikkal       .          .         . . .  .         -0.375000  .         .
## DA Miller        .          .         . . . -0.7037037  .         .         .

4. Create a visual representation of the rating data before and after the normalization

The histograms show the bias in the data is removed after normalization

r0=r[(m=rowCounts(r) > 10),]
getRatingMatrix(r0)[1:15,1:10]
## 15 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                    
## AB de Villiers  . 4 3 . . 3 . 3 . .
## Abdul Samad     . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . .
## AD Russell      1 . . . . . . 6 . .
## AJ Finch        . . . . 3 . . . . .
## AM Rahane       9 . . . . 3 . 3 . .
## AR Patel        . . . . . . . 1 . .
## AT Rayudu       2 . . . . . 1 . . .
## B Kumar         3 . 3 . . . . . . .
## BA Stokes       . . . . . . 3 4 . .
## CA Lynn         . . . . . . . 9 . .
## CH Gayle        . . . . . 6 . 3 . .
## CH Morris       . 3 . . . . . . . .
## D Padikkal      . 4 . . . 3 . . . .
## DA Miller       . . . . . 3 . . . .
#Plot ratings
image(r0, main = "Raw Ratings")
#Plot normalized ratings
r0_m <- normalize(r0)
getRatingMatrix(r0_m)[1:15,1:15]
## 15 x 15 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 15 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                                                       
## AB de Villiers   .         -0.7857143 -1.7857143 .  .       -1.7857143
## Abdul Samad      .          .          .         .  .        .        
## Abhishek Sharma  .          .          .         .  .        .        
## AD Russell      -2.6562500  .          .         .  .        .        
## AJ Finch         .          .          .         . -0.03125  .        
## AM Rahane        4.6041667  .          .         .  .       -1.3958333
## AR Patel         .          .          .         .  .        .        
## AT Rayudu       -2.1363636  .          .         .  .        .        
## B Kumar          0.3636364  .          0.3636364 .  .        .        
## BA Stokes        .          .          .         .  .        .        
## CA Lynn          .          .          .         .  .        .        
## CH Gayle         .          .          .         .  .        1.5476190
## CH Morris        .          0.3500000  .         .  .        .        
## D Padikkal       .          0.6250000  .         .  .       -0.3750000
## DA Miller        .          .          .         .  .       -0.7037037
##                                                                              
## AB de Villiers   .         -1.7857143 . . . -0.7857143 -1.785714  .         .
## Abdul Samad      .          .         . . .  .          .         .         .
## Abhishek Sharma  .          .         . . . -1.6000000  .         .         .
## AD Russell       .          2.3437500 . . . -0.6562500 -0.656250 -0.6562500 .
## AJ Finch         .          .         . . . -2.0312500  .         .         .
## AM Rahane        .         -1.3958333 . . . -1.3958333 -1.395833  .         .
## AR Patel         .         -2.3333333 . . .  .          .         .         .
## AT Rayudu       -3.1363636  .         . . . -1.1363636  .         .         .
## B Kumar          .          .         . . .  .          .         0.3636364 .
## BA Stokes       -0.6086957  0.3913043 . . . -0.6086957  .         .         .
## CA Lynn          .          5.3200000 . . . -0.6800000  .         .         .
## CH Gayle         .         -1.4523810 . . .  1.5476190  .         .         .
## CH Morris        .          .         . . .  0.3500000  .         .         .
## D Padikkal       .          .         . . .  .         -0.375000  .         .
## DA Miller        .          .         . . . -0.7037037  .         .         .
image(r0_m, main = "Normalized Ratings")
set.seed(1234)
hist(getRatings(r0), breaks=25)
hist(getRatings(r0_m), breaks=25)

4a. Data for analysis

The data frame of the batsman vs bowlers from the period 2020 -2022 is read as a dataframe. To remove rows with very low number of ratings(timesOut, SR, Fours, Sixes etc), the rows are filtered so that there are at least more 10 values in the row. For the player estimation the dataframe is converted into a wide-format as a matrix (m x n) of batsman x bowler with each of the columns of the dataframe i.e. timesOut, SR, fours or sixes. These different matrices can be considered as a rating matrix for estimation.

A similar approach is taken for estimating bowler performance. Here a wide form matrix (m x n) of bowler x batsman is created for each of the columns of deliveries, runsConceded, ER, wicketsTaken

5. Batsman’s times Out

The code below estimates the number of times the batsmen would lose his/her wicket to the bowler. As discussed in the algorithm above, the recommendation engine will make an initial estimate features for the bowler and an initial estimate for the parameter vector for the batsmen. Then using gradient descent the recommender engine will determine the feature and parameter values such that the over Mean Squared Error is minimum

From the plot for the different algorithms it can be seen that UBCF performs the best. However the AUC & ROC curves are not optimal and the AUC> 0.5

df3 <- select(df, batsman1,bowler1,timesOut)
df6 <- xtabs(timesOut ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
# Filter only rows where the row count is > 10
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                    
## AB de Villiers  . 4 3 . . 3 . 3 . .
## Abdul Samad     . . . . . . . . . .
## Abhishek Sharma . . . . . . . . . .
## AD Russell      1 . . . . . . 6 . .
## AJ Finch        . . . . 3 . . . . .
## AM Rahane       9 . . . . 3 . 3 . .
## AR Patel        . . . . . . . 1 . .
## AT Rayudu       2 . . . . . 1 . . .
## B Kumar         3 . 3 . . . . . . .
## BA Stokes       . . . . . . 3 4 . .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   3.000   3.463   4.000  21.000
# Evaluate the different plotting methods
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
#Evaluate the error
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 70 users.
## 18 x 145 rating matrix of class 'realRatingMatrix' with 1755 ratings.
##     RMSE      MSE      MAE 
## 2.069027 4.280872 1.496388
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
m=as(c,"data.frame")
names(m) =c("batsman","bowler","TimesOut")

6. Batsman’s Strike rate

This section deals with the Strike rate of batsmen versus bowlers and estimates the values for those where the data is incomplete using UBCF method.

Even here all the algorithms do not perform too efficiently. I did try out a few variations but could not lower the error (suggestions welcome!!)

df3 <- select(df, batsman1,bowler1,SR)
df6 <- xtabs(SR ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                                                           
## AB de Villiers   96.8254 171.4286  33.33333  . 66.66667 223.07692   .     
## Abdul Samad       .      228.0000   .        .  .       100.00000   .     
## Abhishek Sharma 150.0000   .        .        .  .        66.66667   .     
## AD Russell      111.4286   .        .        .  .         .         .     
## AJ Finch        250.0000 116.6667   .        . 50.00000  85.71429 112.5000
## AJ Tye            .        .        .        .  .         .       100.0000
## AK Markram        .        .        .       50  .         .         .     
## AM Rahane       121.1111   .        .        .  .       113.82979 117.9487
## AR Patel        183.3333   .      200.00000  .  .       433.33333   .     
## AT Rayudu       126.5432 200.0000 122.22222  .  .       105.55556   .     
##                                
## AB de Villiers  109.52381 .   .
## Abdul Samad       .       .   .
## Abhishek Sharma   .       .   .
## AD Russell      195.45455 .   .
## AJ Finch          .       .   .
## AJ Tye            .       .   .
## AK Markram        .       .   .
## AM Rahane        33.33333 . 200
## AR Patel        171.42857 .   .
## AT Rayudu       204.76190 .   .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   5.882  85.714 116.667 128.529 160.606 600.000
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 105 users.
## 27 x 145 rating matrix of class 'realRatingMatrix' with 3220 ratings.
##       RMSE        MSE        MAE 
##   77.71979 6040.36508   58.58484
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
n=as(c,"data.frame")
names(n) =c("batsman","bowler","SR")

7. Batsman’s Sixes

The snippet of code estimes the sixes of the batsman against bowlers. The ROC and AUC curve for UBCF looks a lot better here, as it significantly greater than 0.5

df3 <- select(df, batsman1,bowler1,sixes)
df6 <- xtabs(sixes ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                      
## AB de Villiers  3 3 . . . 18 .  3 . .
## AD Russell      3 . . . .  . . 12 . .
## AJ Finch        2 . . . .  . .  . . .
## AM Rahane       7 . . . .  3 1  . . .
## AR Patel        4 . 3 . .  6 .  1 . .
## AT Rayudu       5 2 . . .  . .  1 . .
## BA Stokes       . . . . .  . .  . . .
## CA Lynn         . . . . .  . .  9 . .
## CH Gayle       17 . . . . 17 .  . . .
## CH Morris       . . 3 . .  . .  . . .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    3.00    3.00    4.68    6.00   33.00
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
## Timing stopped at: 0.003 0 0.002
a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 52 users.
## 14 x 145 rating matrix of class 'realRatingMatrix' with 1634 ratings.
##      RMSE       MSE       MAE 
##  3.529922 12.460350  2.532122
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
o=as(c,"data.frame")
names(o) =c("batsman","bowler","Sixes")

8. Batsman’s Fours

The code below estimates 4s for the batsmen

df3 <- select(df, batsman1,bowler1,fours)
df6 <- xtabs(fours ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                      
## AB de Villiers   . 1 . . . 24 . 3 . .
## Abhishek Sharma  . . . . .  . . . . .
## AD Russell       1 . . . .  . . 9 . .
## AJ Finch         . 1 . . .  3 2 . . .
## AK Markram       . . . . .  . . . . .
## AM Rahane       11 . . . .  8 7 . . 3
## AR Patel         . . . . .  . . 3 . .
## AT Rayudu       11 2 3 . .  6 . 6 . .
## BA Stokes        1 . . . .  . . . . .
## CA Lynn          . . . . .  . . 6 . .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   4.000   6.339   9.000  55.000
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
## Timing stopped at: 0.008 0 0.008
## Warning in .local(x, method, ...): 
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!
a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 67 users.
## 17 x 145 rating matrix of class 'realRatingMatrix' with 2083 ratings.
##      RMSE       MSE       MAE 
##  5.486661 30.103447  4.060990
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
p=as(c,"data.frame")
names(p) =c("batsman","bowler","Fours")

9. Batsman’s Total Runs

The code below estimates the total runs that would have scored by the batsman against different bowlers

df3 <- select(df, batsman1,bowler1,totalRuns)
df6 <- xtabs(totalRuns ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                          
## A Badoni         .  . . . .   . .   . . .
## A Manohar        .  . . . .   . .   . . .
## A Nortje         .  . . . .   . .   . . .
## AB de Villiers  61 36 3 . 6 261 .  69 . .
## Abdul Samad      . 57 . . .  12 .   . . .
## Abhishek Sharma  3  . . . .   6 .   . . .
## AD Russell      39  . . . .   . . 129 . .
## AF Milne         .  . . . .   . .   . . .
## AJ Finch        15  7 . . 3  18 9   . . .
## AJ Tye           .  . . . .   . 4   . . .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    9.00   24.00   41.36   54.00  452.00
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given1=7,goodRating1=median(getRatings(r0)))
a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 105 users.
## 27 x 145 rating matrix of class 'realRatingMatrix' with 3256 ratings.
##       RMSE        MSE        MAE 
##   41.50985 1723.06788   29.52958
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
q=as(c,"data.frame")
names(q) =c("batsman","bowler","TotalRuns")

10. Batsman’s Balls Faced

The snippet estimates the balls faced by batsmen versus bowlers

df3 <- select(df, batsman1,bowler1,ballsFaced)
df6 <- xtabs(ballsFaced ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Mishra', 'A Nortje', 'A Zampa' ... ]]
##                                         
## A Badoni         .  . . . .   . .  . . .
## A Manohar        .  . . . .   . .  . . .
## A Nortje         .  . . . .   . .  . . .
## AB de Villiers  63 21 9 . 9 117 . 63 . .
## Abdul Samad      . 25 . . .  12 .  . . .
## Abhishek Sharma  2  . . . .   9 .  . . .
## AD Russell      35  . . . .   . . 66 . .
## AF Milne         .  . . . .   . .  . . .
## AJ Finch         6  6 . . 6  21 8  . . .
## AJ Tye           .  . . . .   9 4  . . .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    9.00   18.00   30.21   39.00  384.00
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
a=eval(r0[1:dim(r0)[1]],0.8, k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 112 users.
## 28 x 145 rating matrix of class 'realRatingMatrix' with 3378 ratings.
##       RMSE        MSE        MAE 
##   33.91251 1150.05835   23.39439
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
r=as(c,"data.frame")
names(r) =c("batsman","bowler","BallsFaced")

11. Generate the Batsmen Performance Estimate

This code generates the estimated dataframe with known and ‘predicted’ values

a1=merge(m,n,by=c("batsman","bowler"))
a2=merge(a1,o,by=c("batsman","bowler"))
a3=merge(a2,p,by=c("batsman","bowler"))
a4=merge(a3,q,by=c("batsman","bowler"))
a5=merge(a4,r,by=c("batsman","bowler"))
a6= select(a5, batsman,bowler,BallsFaced,TotalRuns,Fours, Sixes, SR,TimesOut)
head(a6)
##          batsman          bowler BallsFaced TotalRuns Fours Sixes  SR TimesOut
## 1 AB de Villiers        A Mishra         94       124     7     5 144        5
## 2 AB de Villiers        A Nortje         26        42     4     3 148        3
## 3 AB de Villiers         A Zampa         28        42     5     7 106        4
## 4 AB de Villiers Abhishek Sharma         22        28     0    10 136        5
## 5 AB de Villiers      AD Russell         70       135    14    12 207        4
## 6 AB de Villiers        AF Milne         31        45     6     6 130        3

12. Bowler analysis

Just like the batsman performance estimation we can consider the bowler’s performances also for estimation. Consider the following table

As in the batsman analysis, for every batsman a set of features like (“strong backfoot player”, “360 degree player”,“Power hitter”) can be estimated with a set of initial values. Also every bowler will have an associated parameter vector θθ. Different bowlers will have performance data for different set of batsmen. Based on the initial estimate of the features and the parameters, gradient descent can be used to minimize actual values {for e.g. wicketsTaken(ratings)}.

load("recom_data/bowlerVsBatsman20_22.rdata")

12a. Bowler dataframe

Inspecting the bowler dataframe

head(df2)
##    bowler1        batsman1 balls runsConceded       ER wicketTaken
## 1 A Mishra        A Badoni     0            0 0.000000           0
## 2 A Mishra       A Manohar     0            0 0.000000           0
## 3 A Mishra        A Nortje     0            0 0.000000           0
## 4 A Mishra  AB de Villiers    63           61 5.809524           0
## 5 A Mishra     Abdul Samad     0            0 0.000000           0
## 6 A Mishra Abhishek Sharma     2            3 9.000000           0
names(df2)
## [1] "bowler1"      "batsman1"     "balls"        "runsConceded" "ER"          
## [6] "wicketTaken"

13. Balls bowled by bowler

The below section estimates the balls bowled for each bowler. We can see that UBCF Pearson and UBCF Cosine both perform well

df3 <- select(df2, bowler1,batsman1,balls)
df6 <- xtabs(balls ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]
##                                          
## A Mishra        . . .  63  .  2 35 .  6 .
## A Nortje        . . .  21 25  .  . .  6 .
## A Zampa         . . .   9  .  .  . .  . .
## Abhishek Sharma . . .   9  .  .  . .  6 .
## AD Russell      . . . 117 12  9  . . 21 9
## AF Milne        . . .   .  .  .  . .  8 4
## AJ Tye          . . .  63  .  . 66 .  . .
## Akash Deep      . . .   .  .  .  . .  . .
## AR Patel        . . . 188  5  1 84 . 29 5
## Arshdeep Singh  . . .   6  6 24 18 . 12 .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    9.00   18.00   29.61   36.00  384.00
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 96 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3954 ratings.
##      RMSE       MSE       MAE 
##  30.72284 943.89294  19.89204
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
s=as(c,"data.frame")
names(s) =c("bowler","batsman","BallsBowled")

14. Runs conceded by bowler

This section estimates the runs conceded by the bowler. The UBCF Cosinus algorithm performs the best with TPR increasing fastewr than FPR

df3 <- select(df2, bowler1,batsman1,runsConceded)
df6 <- xtabs(runsConceded ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]
##                                            
## A Mishra        . . .  61  .  3  41 . 15  .
## A Nortje        . . .  36 57  .   . .  8  .
## A Zampa         . . .   3  .  .   . .  .  .
## Abhishek Sharma . . .   6  .  .   . .  3  .
## AD Russell      . . . 276 12  6   . . 21  .
## AF Milne        . . .   .  .  .   . . 10  4
## AJ Tye          . . .  69  .  . 138 .  .  .
## Akash Deep      . . .   .  .  .   . .  .  .
## AR Patel        . . . 205  5  . 165 . 33 13
## Arshdeep Singh  . . .  18  3 51  51 .  6  .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    1.00    9.00   24.00   41.34   54.00  458.00
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
## Timing stopped at: 0.004 0 0.004
## Warning in .local(x, method, ...): 
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 95 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3820 ratings.
##       RMSE        MSE        MAE 
##   43.16674 1863.36749   30.32709
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
t=as(c,"data.frame")
names(t) =c("bowler","batsman","RunsConceded")

15. Economy Rate of the bowler

This section computes the economy rate of the bowler. The performance is not all that good

df3 <- select(df2, bowler1,batsman1,ER)
df6 <- xtabs(ER ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]
##                                                                       
## A Mishra        . . .  5.809524  .     9.00  7.028571 . 15.000000  .  
## A Nortje        . . . 10.285714 13.68  .     .        .  8.000000  .  
## A Zampa         . . .  2.000000  .     .     .        .  .         .  
## Abhishek Sharma . . .  4.000000  .     .     .        .  3.000000  .  
## AD Russell      . . . 14.153846  6.00  4.00  .        .  6.000000  .  
## AF Milne        . . .  .         .     .     .        .  7.500000  6.0
## AJ Tye          . . .  6.571429  .     .    12.545455 .  .         .  
## Akash Deep      . . .  .         .     .     .        .  .         .  
## AR Patel        . . .  6.542553  6.00  .    11.785714 .  6.827586 15.6
## Arshdeep Singh  . . . 18.000000  3.00 12.75 17.000000 .  3.000000  .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.3529  5.2500  7.1126  7.8139  9.8000 36.0000
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
## Timing stopped at: 0.003 0 0.004
## Warning in .local(x, method, ...): 
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 95 users.
## 24 x 195 rating matrix of class 'realRatingMatrix' with 3839 ratings.
##      RMSE       MSE       MAE 
##  4.380680 19.190356  3.316556
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
u=as(c,"data.frame")
names(u) =c("bowler","batsman","EconomyRate")

16. Wickets Taken by bowler

The code below computes the wickets taken by the bowler versus different batsmen

df3 <- select(df2, bowler1,batsman1,wicketTaken)
df6 <- xtabs(wicketTaken ~ ., df3)
df7 <- as.data.frame.matrix(df6)
df8 <- data.matrix(df7)
df8[df8 == 0] <- NA
r <- as(df8,"realRatingMatrix")
r0=r[(rowCounts(r) > 10),]
getRatingMatrix(r0)[1:10,1:10]
## 10 x 10 sparse Matrix of class "dgCMatrix"
##    [[ suppressing 10 column names 'A Badoni', 'A Manohar', 'A Nortje' ... ]]
##                                   
## A Mishra       . . . . . . 1 . . .
## A Nortje       . . . 4 . . . . . .
## A Zampa        . . . 3 . . . . . .
## AD Russell     . . . 3 . . . . . .
## AJ Tye         . . . 3 . . 6 . . .
## AR Patel       . . . 4 . 1 3 . 1 1
## Arshdeep Singh . . . 3 . . 3 . . .
## AS Rajpoot     . . . . . . 3 . . .
## Avesh Khan     . . . . . . 1 . 3 .
## B Kumar        . . . 9 . . 3 . 1 .
summary(getRatings(r0))
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##   1.000   3.000   3.000   3.423   3.000  21.000
evalRecomMethods(r0[1:dim(r0)[1]],k1=5,given=7,goodRating1=median(getRatings(r0)))
## Timing stopped at: 0.003 0 0.003
## Warning in .local(x, method, ...): 
##   Recommender 'UBCF Pearson' has failed and has been removed from the results!
a=eval(r0[1:dim(r0)[1]],0.8,k1=5,given1=7,goodRating1=median(getRatings(r0)),"UBCF")
## Recommender of type 'UBCF' for 'realRatingMatrix' 
## learned using 64 users.
## 16 x 195 rating matrix of class 'realRatingMatrix' with 1908 ratings.
##     RMSE      MSE      MAE 
## 2.672677 7.143203 1.956934
b=round(as(a,"matrix")[1:10,1:10])
c <- as(b,"realRatingMatrix")
v=as(c,"data.frame")
names(v) =c("bowler","batsman","WicketTaken")

17. Generate the Bowler Performance estmiate

The entire dataframe is regenerated with known and ‘predicted’ values

r1=merge(s,t,by=c("bowler","batsman"))
r2=merge(r1,u,by=c("bowler","batsman"))
r3=merge(r2,v,by=c("bowler","batsman"))
r4= select(r3,bowler, batsman, BallsBowled,RunsConceded,EconomyRate, WicketTaken)
head(r4)
##     bowler         batsman BallsBowled RunsConceded EconomyRate WicketTaken
## 1 A Mishra  AB de Villiers         102          144           8           4
## 2 A Mishra     Abdul Samad          13           20           7           4
## 3 A Mishra Abhishek Sharma          14           26           8           2
## 4 A Mishra      AD Russell          47           85           9           3
## 5 A Mishra        AJ Finch          45           61          11           4
## 6 A Mishra          AJ Tye          14           20           5           4

18. Conclusion

This post showed an approach for performing the Batsmen Performance Estimate & Bowler Performance Estimate. The performance of the recommender engine could have been better. In any case, I think this approach will work for player estimation provided the recommender algorithm is able to achieve a high degree of accuracy. This will be a good way to estimate as the algorithm will be able to determine features and nuances of batsmen and bowlers which cannot be captured by data.

References

  1. Recommender Systems – Machine Learning by Prof Andrew Ng
  2. recommenderlab: A Framework for Developing and Testing Recommendation Algorithms
  3. ROC 
  4. Precision-Recall

Also see

  1. Big Data 7: yorkr waltzes with Apache NiFi
  2. Benford’s law meets IPL, Intl. T20 and ODI cricket
  3. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  4. IPL 2022: Near real-time analytics with GooglyPlusPlus!!!
  5. Sixer
  6. Introducing cricpy:A python package to analyze performances of cricketers
  7. The Clash of the Titans in Test and ODI cricket
  8. Cricketr adds team analytics to its repertoire!!!
  9. Informed choices through Machine Learning – Analyzing Kohli, Tendulkar and Dravid
  10. Big Data 6: The T20 Dance of Apache NiFi and yorkpy

To see all posts click Index of posts