Optimal Deep Learning model selection using wandb.ai

In this post I use Weights and Biases' wandb.ai 'sweep' feature to automatically select the best Deep Learning model out of a set of models created through Grid Search. I chanced upon the Weights and Biases site when I was training and fine-tuning the T5 transformer model on Kaggle, for my post GenerativeAI:Using T5 Transformer model to summarise Indian Philosophy. During this process Kaggle had requested a token from wandb.ai.

Out of curiosity, I started to explore the Weights and Biases (W&B) machine learning site and was impressed with its visualisation capabilities, so I decided to give Weights and Biases a try. The live visualisation features of the site are quite interesting, and they make it very easy to select the optimal model when doing a Grid search or Random search through combinations of hyperparameters.

For this purpose, I used the processed T20 match dataset which I had earlier used to compute the Win Probability of T20 teams. For more details please see my post GooglyPlusPlus: Win Probability using Deep Learning and player embeddings.

Searching through high dimensional hyperparameter spaces to find the best-performing model can quickly get unwieldy. Hyperparameter sweeps provide an organised and efficient way to automatically search through combinations of hyperparameter values (e.g. learning rate, batch size, epochs, dropout, optimizer type) to find the optimal values.

Here are the steps

a) Install, import

!pip install wandb -qU
import wandb
from wandb.keras import WandbCallback
wandb.login()
import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from pathlib import Path
import matplotlib.pyplot as plt

b) Load the dataset

df1=pd.read_csv('t20.csv')
print("Shape of dataframe=",df1.shape)

train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
train_dataset1

a=train_dataset1.describe()
stats=a.transpose()
a

Shape of dataframe= (1359888, 10)
batsmanIdx	bowlerIdx	ballNum	ballsRemaining	runs	runRate	numWickets	runsMomentum	perfIndex
count	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06	1.087910e+06
mean	2.561058e+03	1.939449e+03	1.185352e+02	6.001942e+01	8.110290e+01	1.611611e+00	2.604912e+00	2.886850e-01	9.619675e+00
std	1.479446e+03	1.095097e+03	6.934078e+01	3.514725e+01	4.977998e+01	2.983874e+00	2.195410e+00	6.066070e-01	4.602859e+00
min	1.000000e+00	1.000000e+00	1.000000e+00	1.000000e+00	-5.000000e+00	-5.000000e+00	0.000000e+00	3.571429e-02	0.000000e+00
25%	1.230000e+03	9.400000e+02	5.900000e+01	3.000000e+01	4.100000e+01	1.043478e+00	1.000000e+00	1.058824e-01	6.539326e+00
50%	2.492000e+03	1.919000e+03	1.170000e+02	5.900000e+01	7.800000e+01	1.300000e+00	2.000000e+00	1.408451e-01	9.246753e+00
75%	3.868000e+03	2.884000e+03	1.770000e+02	9.000000e+01	1.170000e+02	1.590312e+00	4.000000e+00	2.352941e-01	1.218349e+01
max	5.226000e+03	3.848000e+03	2.860000e+02	1.610000e+02	2.780000e+02	2.510000e+02	1.000000e+01	1.100000e+01	6.600000e+01

c) Define the Deep Learning model

import pandas as pd
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model

tf.random.set_seed(432)
# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

# Set the embedding size
no_of_unique_batman=len(df1["batsmanIdx"].unique())
print(no_of_unique_batman)
no_of_unique_bowler=len(df1["bowlerIdx"].unique())
print(no_of_unique_bowler)
embedding_size_bat = no_of_unique_batman ** (1/4)
embedding_size_bwl = no_of_unique_bowler ** (1/4)
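# Note: the fourth-root heuristic above suggests embedding sizes of roughly
# 8.5 for batsmen (5226 ** 0.25) and 7.9 for bowlers (3848 ** 0.25); the
# Embedding layers below hardcode output_dim=16, so embedding_size_bat and
# embedding_size_bwl are computed but not actually used.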


# create embedding layer for the categorical predictor
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)

# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])

# add hidden layers
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)
# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)
# create model

# Initialize a new W&B run
#run = wandb.init(project='t20', group='cricket')

model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()

# Initialize a new W&B run and log hyperparameters and run metadata
run = wandb.init(
    # set the wandb project where this run will be logged
    project="t20",
    group="cricket",
    # track hyperparameters and run metadata
    config={
        "learning_rate": 0.02,
        "dropout": 0.01,
        "batch_size": 1024,
        "epochs": 5,
    }
)


5226
3848
(None, 1)
Model: "model"
__________________________________________________________________________________________________
 Layer (type)                Output Shape                 Param #   Connected to                  
==================================================================================================
 batsmanIdx (InputLayer)     [(None, 1)]                  0         []                            
                                                                                                  
 bowlerIdx (InputLayer)      [(None, 1)]                  0         []                            
                                                                                                  
 embedding (Embedding)       (None, 1, 16)                83632     ['batsmanIdx[0][0]']          
                                                                                                  
 embedding_1 (Embedding)     (None, 1, 16)                61584     ['bowlerIdx[0][0]']           
                                                                                                  
 flatten (Flatten)           (None, 16)                   0         ['embedding[0][0]']           
                                                                                                  
 flatten_1 (Flatten)         (None, 16)                   0         ['embedding_1[0][0]']         
                                                                                                  
 ballNum (InputLayer)        [(None, 1)]                  0         []                            
                                                                                                  
 ballsRemaining (InputLayer  [(None, 1)]                  0         []                            
 )                                                                                                
                                                                                                  
 runs (InputLayer)           [(None, 1)]                  0         []                            
                                                                                                  
 runRate (InputLayer)        [(None, 1)]                  0         []                            
                                                                                                  
 numWickets (InputLayer)     [(None, 1)]                  0         []                            
                                                                                                  
 runsMomentum (InputLayer)   [(None, 1)]                  0         []                            
                                                                                                  
 perfIndex (InputLayer)      [(None, 1)]                  0         []                            
                                                                                                  
 concatenate (Concatenate)   (None, 39)                   0         ['flatten[0][0]',             
                                                                     'flatten_1[0][0]',           
                                                                     'ballNum[0][0]',             
                                                                     'ballsRemaining[0][0]',      
                                                                     'runs[0][0]',                
                                                                     'runRate[0][0]',             
                                                                     'numWickets[0][0]',          
                                                                     'runsMomentum[0][0]',        
                                                                     'perfIndex[0][0]']           
                                                                                                  
 dense (Dense)               (None, 64)                   2560      ['concatenate[0][0]']         
                                                                                                  
 dropout (Dropout)           (None, 64)                   0         ['dense[0][0]']               
                                                                                                  
 dense_1 (Dense)             (None, 32)                   2080      ['dropout[0][0]']             
                                                                                                  
 dropout_1 (Dropout)         (None, 32)                   0         ['dense_1[0][0]']             
                                                                                                  
 dense_2 (Dense)             (None, 16)                   528       ['dropout_1[0][0]']           
                                                                                                  
 dropout_2 (Dropout)         (None, 16)                   0         ['dense_2[0][0]']             
                                                                                                  
 dense_3 (Dense)             (None, 8)                    136       ['dropout_2[0][0]']           
                                                                                                  
 dropout_3 (Dropout)         (None, 8)                    0         ['dense_3[0][0]']             
                                                                                                  
 output (Dense)              (None, 1)                    9         ['dropout_3[0][0]']           
                                                                                                  
==================================================================================================
Total params: 150529 (588.00 KB)
Trainable params: 150529 (588.00 KB)
Non-trainable params: 0 (0.00 Byte)

d) Create a Training script

def get_optimizer(lr=1e-2, optimizer="adam"):
    "Select optimizer between adam and sgd with momentum"
    if optimizer.lower() == "adam":
        return tf.keras.optimizers.Adam(learning_rate=lr)
    if optimizer.lower() == "sgd":
        return tf.keras.optimizers.SGD(learning_rate=lr, momentum=0.1)
    raise ValueError(f"Unknown optimizer: {optimizer}")

def train(model, batch_size=1024, epochs=10, lr=1e-2, optimizer='adam', log_freq=10):

    # Compile model like you usually do.
    tf.keras.backend.clear_session()
    model.compile(loss="binary_crossentropy",
                  optimizer=get_optimizer(lr, optimizer),
                  metrics=["accuracy"])

    # callback setup
    cbs = [WandbCallback(data_type='auto', log_batch_frequency=None)]

    # train the model
    history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=epochs, batch_size=batch_size,callbacks=cbs,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)
    return history
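The train() function can be given a quick smoke test outside the sweep, assuming a W&B run is already active (wandb.init was called above) so that WandbCallback has a run to log to:

# quick sanity check before launching the sweep
history = train(model, batch_size=1024, epochs=2, lr=0.01, optimizer='adam')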

e) Define the sweep for Grid Search

#Grid search
sweep_config = {
    'method': 'grid'
    }

metric = {
    'name': 'val_loss',
    'goal': 'minimize'
    }

sweep_config['metric'] = metric
# Optimizers - Adam, SGD
parameters_dict = {
    'optimizer': {
        'values': ['adam', 'sgd']
        },
    'dropout': {
          'values': [0.1, 0.05]
        },
    }

sweep_config['parameters'] = parameters_dict

parameters_dict.update({
    'epochs': {
        'value': 20}
    })

import math
# Set learning_rate, batch_size
parameters_dict.update({
    'learning_rate': {
         'values': [0.005,0.008,0.01,.03]  
      },
    'batch_size': {
        'values': [1024,2048]
      }
    })

import pprint
pprint.pprint(sweep_config)

{'method': 'grid',
 'metric': {'goal': 'minimize', 'name': 'val_loss'},
 'parameters': {'batch_size': {'values': [1024, 2048]},
                'dropout': {'values': [0.1, 0.05]},
                'epochs': {'value': 20},
                'learning_rate': {'values': [0.005, 0.008, 0.01, 0.03]},
                'optimizer': {'values': ['adam', 'sgd']}}}
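As an aside, this grid defines 2 optimizers × 2 dropout values × 4 learning rates × 2 batch sizes = 32 combinations (epochs is a single fixed value), while the agent in step g) is run with count=10 and so explores only 10 of them. A quick sketch to count the grid size:

from itertools import product

# multiply out the 'values' entries; 'value' entries are fixed
grids = [p['values'] for p in sweep_config['parameters'].values() if 'values' in p]
print(len(list(product(*grids))))   # 2 * 2 * 4 * 2 = 32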

f) Wrap the Training Loop


def sweep_train(config_defaults=None):
    # Initialize wandb with a sample project name
    with wandb.init(config=config_defaults):  # this gets over-written in the Sweep

        # Specify the other hyperparameters to the configuration, if any
        wandb.config.architecture_name = "DL"
        wandb.config.dataset_name = "T20"
        # initialize model
        #model = T20Net(wandb.config.dropout)

        train(model,
              wandb.config.batch_size,
              wandb.config.epochs,
              wandb.config.learning_rate,
              wandb.config.optimizer)
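One caveat: since the model is built once, outside sweep_train(), with Dropout(0.1) layers, the swept dropout value never actually reaches the network (the commented-out T20Net line hints at this). A minimal sketch of a hypothetical build_model(dropout) factory, reusing the input and embedding tensors defined earlier, that would wire it in:

def build_model(dropout=0.1):
    # rebuild the same architecture with a configurable dropout rate
    x = keras.layers.concatenate([batsmanIdx_flatten, bowlerIdx_flatten,
        ballNum_input, ballsRemaining_input, runs_input, runRate_input,
        numWickets_input, runsMomentum_input, perfIndex_input])
    for units in [64, 32, 16, 8]:
        x = Dense(units, activation='relu')(x)
        x = Dropout(dropout)(x)
    out = Dense(1, activation='sigmoid')(x)
    return Model(inputs=[batsmanIdx_input, bowlerIdx_input, ballNum_input,
                         ballsRemaining_input, runs_input, runRate_input,
                         numWickets_input, runsMomentum_input, perfIndex_input],
                 outputs=out)

# inside sweep_train() one would then call:
#     model = build_model(wandb.config.dropout)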

g) Initialise Sweep and Run Agent

sweep_id = wandb.sweep(sweep_config, project="sweeps-keras-t20")
wandb.agent(sweep_id, sweep_train, count=10)
wandb: WARNING Calling wandb.login() after wandb.init() has no effect.
wandb: Agent Starting Run: zbaaq0bn with config:
wandb: 	batch_size: 1024
wandb: 	dropout: 0.1
wandb: 	epochs: 20
wandb: 	learning_rate: 0.005
wandb: 	optimizer: adam

Epoch 19/20
1061/1063 [============================>.] - ETA: 0s - loss: 0.3073 - accuracy: 0.8490/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
  saving_api.save_model(
wandb: Adding directory to artifact (/content/wandb/run-20231004_065327-zbaaq0bn/files/model-best)... Done. 0.0s
1063/1063 [==============================] - 15s 14ms/step - loss: 0.3073 - accuracy: 0.8490 - val_loss: 0.3093 - val_accuracy: 0.8479
Epoch 20/20
1062/1063 [============================>.] - ETA: 0s - loss: 0.3052 - accuracy: 0.8502/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
  saving_api.save_model(
wandb: Adding directory to artifact (/content/wandb/run-20231004_065327-zbaaq0bn/files/model-best)... Done. 0.0s
1063/1063 [==============================] - 18s 17ms/step - loss: 0.3052 - accuracy: 0.8502 - val_loss: 0.3068 - val_accuracy: 0.8490
Waiting for W&B process to finish... (success).
Run history:

accuracy	▁▅▅▆▆▆▇▇▇▇▇▇▇███████
epoch	▁▁▂▂▂▃▃▄▄▄▅▅▅▆▆▇▇▇██
loss	█▅▄▃▃▃▃▂▂▂▂▂▂▂▁▁▁▁▁▁
val_accuracy	▁▂▃▄▄▅▅▅▆▆▆▇▇▇▇▇▇███
val_loss	█▆▅▅▄▄▃▃▃▃▃▂▂▂▂▂▂▁▁▁

Run summary:

accuracy	0.85022
best_epoch	19
best_val_loss	0.30681
epoch	19
loss	0.30521
val_accuracy	0.849
val_loss	0.30681

...
...
wandb: Agent Starting Run: 4qtyxzq9 with config:
wandb: 	batch_size: 1024
wandb: 	dropout: 0.1
wandb: 	epochs: 20
wandb: 	learning_rate: 0.008
wandb: 	optimizer: sgd
...
...

Epoch 18/20
1063/1063 [==============================] - 13s 12ms/step - loss: 0.2672 - accuracy: 0.8697 - val_loss: 0.2819 - val_accuracy: 0.8624
Epoch 19/20
1061/1063 [============================>.] - ETA: 0s - loss: 0.2669 - accuracy: 0.8697/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
  saving_api.save_model(
wandb: Adding directory to artifact (/content/wandb/run-20231004_070920-4qtyxzq9/files/model-best)... Done. 0.0s
1063/1063 [==============================] - 14s 13ms/step - loss: 0.2669 - accuracy: 0.8697 - val_loss: 0.2813 - val_accuracy: 0.8635
Epoch 20/20
1063/1063 [==============================] - 13s 12ms/step - loss: 0.2650 - accuracy: 0.8707 - val_loss: 0.2957 - val_accuracy: 0.8557
Waiting for W&B process to finish... (success).
6.805 MB of 6.818 MB uploaded (0.108 MB deduped)
Run history:

accuracy	▁▂▃▃▄▄▄▄▄▄▄▄▅▅▄▆▅▆▆█
epoch	▁▁▂▂▂▃▃▄▄▄▅▅▅▆▆▇▇▇██
loss	█▇▆▆▅▅▅▅▅▅▅▄▄▄▄▄▄▃▃▁
val_accuracy	▇▅▅▁█▅▇▆▆▅█▅▅▆▃▇▁▇█▁
val_loss	▃▄▄▅▁▃▂▃▃▃▁▄▄▂▆▂█▁▁█

Run summary:

accuracy	0.87067
best_epoch	18
best_val_loss	0.28127
epoch	19
loss	0.26499
val_accuracy	0.85565
val_loss	0.29573
...
...
wandb: Agent Starting Run: lt2fknva with config:
wandb: 	batch_size: 1024
wandb: 	dropout: 0.1
wandb: 	epochs: 20
wandb: 	learning_rate: 0.01
wandb: 	optimizer: adam
Tracking run with wandb version 0.15.11
Run data is saved locally in /content/wandb/run-20231004_071359-lt2fknva
Syncing run lively-sweep-5 to Weights & Biases (docs)
...
...
Epoch 19/20
1063/1063 [==============================] - 14s 13ms/step - loss: 0.2779 - accuracy: 0.8651 - val_loss: 0.2883 - val_accuracy: 0.8607
Epoch 20/20
1060/1063 [============================>.] - ETA: 0s - loss: 0.2795 - accuracy: 0.8643/usr/local/lib/python3.10/dist-packages/keras/src/engine/training.py:3000: UserWarning: You are saving your model as an HDF5 file via `model.save()`. This file format is considered legacy. We recommend using instead the native Keras format, e.g. `model.save('my_model.keras')`.
  saving_api.save_model(
wandb: Adding directory to artifact (/content/wandb/run-20231004_071359-lt2fknva/files/model-best)... Done. 0.0s
1063/1063 [==============================] - 16s 15ms/step - loss: 0.2795 - accuracy: 0.8643 - val_loss: 0.2831 - val_accuracy: 0.8620
Waiting for W&B process to finish... (success).
Run history:

accuracy	▁▁▁▂▂▃▃▄▅▅▅▆▆▆▆▆▇▇█▇
epoch	▁▁▂▂▂▃▃▄▄▄▅▅▅▆▆▇▇▇██
loss	███▇▇▆▅▆▅▄▄▃▃▃▃▂▂▂▁▂
val_accuracy	▁▅▂▆▆▅▂▆▆▅▇▇▆▇▅▃▃▆▇█
val_loss	▇▆▇▅▃▅█▆▅▄▂▃▄▂▆▆▇▃▃▁

Run summary:

accuracy	0.8643
best_epoch	19
best_val_loss	0.28309
epoch	19
loss	0.27949
val_accuracy	0.86195
val_loss	0.28309
...
...

On the W&B site, each of the runs is captured very nicely.

The best model is 'lively-sweep-5', with the lowest validation loss.

The picture below gives the validation loss for various combinations of the hyperparameters.

It is very easy to visually pick the best model by its loss, as shown below: it is lively-sweep-5. We can also see the values of the hyperparameters for this DL model.

Details of optimal Deep Learning model

a. Run – lively-sweep-5

b. optimizer – adam

c. learning_rate – 0.01

d. batch_size – 1024

e. dropout – 0.1

We can see the performance of this model individually by clicking lively-sweep-5 on the left panel.

It was good fun to play around with Weights and Biases in selecting an optimal model.

See also

  1. Deconstructing Convolutional Neural Networks with Tensorflow and Keras
  2. Deep Learning from first principles in Python, R and Octave – Part 7
  3. Identifying cricketing shots using AI
  4. Introducing cricket package yorkr: Part 2-Trapped leg before wicket!
  5. Introducing cricpy:A python package to analyze performances of cricketers

To see all posts click Index of posts

Identifying cricketing shots using AI

Image classification using Deep Learning has been around for almost a decade. In fact, this field, with the use of Convolutional Neural Networks (CNNs), is quite mature, and the algorithms work very well in image classification, object detection, facial recognition and self-driving cars. In this post, I use AI image classification to identify cricketing shots. While the problem falls in a well-known domain, the application of image classification to identifying cricketing shots is probably new. I have selected three cricketing shots, namely the front drive, the sweep shot and the hook shot, for this purpose. My purpose was to build a proof-of-concept and not a perfect product. I have kept the dataset deliberately small (for obvious reasons): just about 14 samples for each cricketing shot, or about 41 samples in total for both training and test data. Even so, I get reasonable performance from the AI model.

Included below are some examples of the data set

This post is based on Image classification from Hugging Face. Interestingly, the model used here is based on Vision Transformers (ViT from Google Brain) and not on Convolutional Neural Networks as is usually done.

The steps to fine-tune the ViT transformer with the 'strokes' dataset are

a) Install the necessary libraries
! pip install transformers[torch] datasets evaluate accelerate -U
! pip install -U accelerate
! pip install -U transformers

b) Login to Hugging Face account

 from huggingface_hub import notebook_login
notebook_login()

Login successful

c) Load the batting strokes dataset with 41 images

from datasets import load_dataset
df1 = load_dataset("tvganesh/strokes",split='train')
type(df1)
len(df1)

41
df1
Dataset({
    features: ['image', 'label'],
    num_rows: 41
})

d) Create a dictionary that maps the label name to an integer and vice versa. Display the labels

labels = df1.features["label"].names
label2id, id2label = dict(), dict()
for i, label in enumerate(labels):
    label2id[label] = str(i)
    id2label[str(i)] = label

labels

['front drive', 'hook shot', 'sweep shot']

e) Load ViT image processor. To apply the correct transformations, ImageProcessor is initialised with a configuration that was saved along with the pretrained model 

from transformers import AutoImageProcessor

checkpoint = "google/vit-base-patch16-224-in21k"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)

f) Apply image transformations to the images to make the model more robust against overfitting

from torchvision.transforms import RandomResizedCrop, Compose, Normalize, ToTensor

normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
size = (
    image_processor.size["shortest_edge"]
    if "shortest_edge" in image_processor.size
    else (image_processor.size["height"], image_processor.size["width"])
)
_transforms = Compose([RandomResizedCrop(size), ToTensor(), normalize])

g) Create a preprocessing function to apply the transforms and return the pixel_values of the image as the inputs to the model:

def transforms(examples):
    examples["pixel_values"] = [_transforms(img.convert("RGB")) for img in examples["image"]]
    del examples["image"]
    return examples

h) Apply the preprocessing function over the entire dataset, using Hugging Face Dataset’s ‘with_transform’ method

df1 = df1.with_transform(transforms)
from transformers import DefaultDataCollator
data_collator = DefaultDataCollator()
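Note that the Trainer in step l) refers to train_dataset and test_dataset, which are not created in the snippets above. A minimal sketch of one way to split the 41-image dataset, assuming the Hugging Face Dataset train_test_split method (the transform is re-applied to each split to be safe):

# hypothetical ~80/20 split of the 41-image dataset
splits = df1.train_test_split(test_size=0.2, seed=42)
train_dataset = splits["train"].with_transform(transforms)
test_dataset = splits["test"].with_transform(transforms)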

i) Evaluate model’s performance with evaluate

import evaluate
accuracy = evaluate.load("accuracy")

j) Calculate accuracy by passing in predictions and labels

import numpy as np
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return accuracy.compute(predictions=predictions, references=labels)
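The metric wiring can be sanity-checked with made-up predictions and references:

# two of the three dummy predictions match the references
print(accuracy.compute(predictions=[0, 1, 2], references=[0, 1, 1]))
# {'accuracy': 0.6666666666666666}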

k) Load ViT, specifying the number of expected labels and the label mappings

 from transformers import AutoModelForImageClassification, TrainingArguments, Trainer

model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=id2label,
    label2id=label2id,
)

l)

  1. Pass the training arguments to Trainer along with the model, dataset, tokenizer, data collator, and compute_metrics function.
  2. Call train() to finetune your model.
training_args = TrainingArguments(
    output_dir="data_classify",
    remove_unused_columns=False,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    #gradient_accumulation_steps=4,
    per_device_eval_batch_size=6,
    num_train_epochs=20,
    warmup_ratio=0.1,
    logging_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    push_to_hub=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=image_processor,
    compute_metrics=compute_metrics,
)

trainer.train()

Epoch	Training Loss	Validation Loss	Accuracy
1	No log	0.434451	1.000000
2	No log	0.388312	1.000000
3	0.361200	0.409932	0.888889
4	0.361200	0.245226	1.000000
5	0.293400	0.196930	1.000000
6	0.293400	0.167858	1.000000
7	0.293400	0.140349	1.000000
8	0.203000	0.153016	1.000000
9	0.203000	0.116115	1.000000
10	0.150500	0.129171	1.000000
11	0.150500	0.103121	1.000000
12	0.150500	0.108433	1.000000
13	0.138800	0.107799	1.000000
14	0.138800	0.093700	1.000000
15	0.107600	0.100769	1.000000
16	0.107600	0.113148	1.000000
17	0.107600	0.100740	1.000000
18	0.104700	0.177483	0.888889
19	0.104700	0.084438	1.000000
20	0.090200	0.112654	1.000000
TrainOutput(global_step=80, training_loss=0.18118578270077706, metrics={'train_runtime': 176.3834, 'train_samples_per_second': 3.628, 'train_steps_per_second': 0.454, 'total_flos': 4.959531785650176e+16, 'train_loss': 0.18118578270077706, 'epoch': 20.0})

m) Push to Hub

trainer.push_to_hub()

You can try out my fine-tuned model at identify_stroke

Here are a couple of trials

As I mentioned before, the model should be reasonably accurate but not perfect, since my training dataset is extremely small. This is just a prototype to show that shot identification in cricket with AI is in the realm of the possible.

References

  1. Image classification
  2. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Do take a look at

  1. Using Reinforcement Learning to solve Gridworld
  2. Deconstructing Convolutional Neural Networks with Tensorflow and Keras
  3. GenerativeAI:Using T5 Transformer model to summarise Indian Philosophy
  4. GooglyPlusPlus: Win Probability using Deep Learning and player embeddings
  5. T20 Win Probability using CTGANs, synthetic data
  6. Deep Learning from first principles in Python, R and Octave – Part 6
  7. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  8. Big Data 6: The T20 Dance of Apache NiFi and yorkpy
  9. Re-introducing cricketr! : An R package to analyze performances of cricketers

To see all posts click Index of posts

GenerativeAI:Using T5 Transformer model to summarise Indian Philosophy

Ever since I started to use ChatGPT, I have been fascinated by its capabilities. To a large extent, the abilities of Large Language Models (LLMs) are quite magical – the way they answer questions, the way they summarise passages, the way they create poems et cetera. All the LLMs need is a large corpus of data from the internet, articles, wikis, blogs, and so on.

On delving a little deeper into Generative AI and LLMs, I learnt that this is based on the principle of being able to predict the most probable word in a given sequence. It made me wonder whether the world of ideas, language and communication is actually governed by probabilities. Does what we communicate fall within the purview of statistics?

As an aside, extending this further: if we visualise a world in which every human action in a situation is assigned an embedding vector, and if we feed the responses of all humans over time in different situations to the equivalent of a Transformer of a Large Human Reaction Model (LHRM) ;-), we can envisage the model being capable of predicting the response of a human in a given situation. In my opinion, the machine would be fairly right on most occasions, as it could select the most probable choice of action, much like 'The Machine' in Person of Interest. However, this does not mean that the machine (AI) is actually more intelligent than humans. All it means is that human responses are part of a finite subset of possibilities, and The Machine (AI) can compute the possibilities and associated probabilities much quicker than humans. Does it mean that the world is deterministic? Possibly.

In this post, I use the T5 transformer to summarise Indian philosophy. For this task, I have fine-tuned the T5 model with a curated dataset taken from random passages on Hindu philosophy available on the internet. For each passage, I had to hand-create the corresponding summary. This was a fairly tedious and demanding task, but an enlightening one. It was interesting to understand how our ancestors, the Rishis, understood reality, the physical world, senses, the mind, the intellect, consciousness (Atman) and universal consciousness (Brahman). (Incidentally, I was only able to curate about 130 rows of philosophical snippets and manually create the corresponding summaries. This is probably a very small dataset for fine-tuning, but I just wanted to see the performance of the T5 model in a new domain.)

In this post the T5 model is fine-tuned with the curated dataset and the rouge1 and rouge2 scores are used to evaluate the model’s performance.

I have used the Hugging Face Hub for the transformer model, corresponding LLM functions and management of the dataset etc. The Hugging Face ecosystem is simply wow!!

Summarisation with T5-small model

a) Install the necessary libraries

! pip install transformers[torch] datasets evaluate rouge_score accelerate -U
! pip install -U accelerate
! pip install -U transformers

b) Login to Hugging Face account


from huggingface_hub import notebook_login
notebook_login()

Login successful

c) Load the curated dataset on Hindu philosophy

from datasets import load_dataset
df1 = load_dataset("tvganesh/philosophy",split='train')

d) Load a T5 tokenizer to process text and summary

  1. Prefix the input with a prompt so T5 knows this is a summarization task.
  2. Use the keyword text_target argument when tokenizing labels.
  3. Truncate sequences to be no longer than the maximum length set by the max_length parameter. The max_length of the text is kept at 220 and the max_length of the summary at 50.
  4. The ‘map’ function of the Huggingface dataset can be used to apply the pre_process function across the entire data.
from transformers import AutoTokenizer

checkpoint = "t5-small"
#checkpoint = "facebook/bart-large-cnn"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

prefix = "summarize: "

def preprocess_function(passages):
    inputs = [prefix + doc for doc in passages["text"]]
    model_inputs = tokenizer(inputs, max_length=220, truncation=True)

    labels = tokenizer(text_target=passages["summary"], max_length=50, truncation=True)

    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized_df1 = df1.map(preprocess_function, batched=True)

DataCollatorForSeq2Seq can be used to dynamically pad the sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

from transformers import DataCollatorForSeq2Seq
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=checkpoint)

e) Evaluate performance of Model

The rouge1,rouge2  metric can be used to evaluate the performance of the model

import evaluate
rouge = evaluate.load("rouge")

f) Create a function compute_metrics that passes the predictions and labels to 'compute' to calculate the ROUGE metric:

import numpy as np

def compute_metrics(eval_pred):
    # evaluate predictions and labels
    predictions, labels = eval_pred
    decoded_preds = tokenizer.batch_decode(predictions, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
     # compute rouge score between the labels and predictions
    result = rouge.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)

    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions]
    result["gen_len"] = np.mean(prediction_lens)

    return {k: round(v, 4) for k, v in result.items()}
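The ROUGE metric itself can be sanity-checked on a made-up prediction/reference pair before training:

# hypothetical pair: high unigram overlap, lower bigram overlap
print(rouge.compute(predictions=["the atman is the inner self"],
                    references=["the atman is the self within"]))
# returns rouge1, rouge2, rougeL and rougeLsum scores between 0 and 1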

g) Split the data into training (~80%) and test (~20%) datasets

train_dataset = tokenized_df1.shuffle(seed=42).select(range(100))
# take the remaining 30 rows so that train and test do not overlap
test_dataset = tokenized_df1.shuffle(seed=42).select(range(100, 130))

len(train_dataset)

h) Train the model with AutoModelForSeq2SeqLM

from transformers import AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

i)

  1. Set training hyperparameters in Seq2SeqTrainingArguments. Adam optimisation, with a custom learning rate, beta1 and beta2, is used
  2. Pass the training arguments to Seq2SeqTrainer along with the model, dataset, tokenizer, data collator, and compute_metrics function.
  3. Call train() to finetune your model.
training_args = Seq2SeqTrainingArguments(
    output_dir="philosophy_model",
    evaluation_strategy="epoch",
    learning_rate= 5.6e-03,
    adam_beta1=0.9,
    adam_beta2=0.99,
    adam_epsilon=1e-06,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=20,
    predict_with_generate=True,
    fp16=True,
    push_to_hub=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()

Epoch	Training Loss	Validation Loss	Rouge1	Rouge2	Rougel	Rougelsum	Gen Len
1	No log	2.246223	0.363200	0.146200	0.311400	0.312600	18.333300
2	No log	1.461140	0.459000	0.303900	0.417800	0.417800	18.566700
3	No log	0.832312	0.546500	0.425900	0.524700	0.520800	17.133300
4	No log	0.472341	0.616100	0.517600	0.601000	0.600400	18.366700
5	No log	0.312106	0.681200	0.607800	0.674700	0.671400	18.233300
6	No log	0.154585	0.741800	0.702300	0.733800	0.731300	18.066700
7	No log	0.112100	0.783200	0.763000	0.780200	0.778900	18.500000
8	No log	0.069882	0.801400	0.788200	0.802700	0.800900	18.533300
9	No log	0.045941	0.795800	0.780500	0.794600	0.791700	18.500000
10	No log	0.051655	0.809100	0.795800	0.810500	0.809000	18.466700
11	No log	0.035792	0.799400	0.785200	0.797300	0.794600	18.500000
12	No log	0.041766	0.779900	0.754800	0.774700	0.773200	18.266700
13	No log	0.010703	0.810000	0.800400	0.810700	0.809000	18.500000
14	No log	0.006519	0.807700	0.797100	0.809400	0.807500	18.500000
15	No log	0.017779	0.808000	0.796000	0.809400	0.807500	18.366700
16	No log	0.001681	0.810000	0.800400	0.810700	0.809000	18.500000
17	No log	0.005469	0.810000	0.800400	0.810700	0.809000	18.500000
18	No log	0.002003	0.810000	0.800400	0.810700	0.809000	18.500000
19	No log	0.000638	0.810000	0.800400	0.810700	0.809000	18.500000
20	No log	0.000498	0.810000	0.800400	0.810700	0.809000	18.500000
TrainOutput(global_step=260, training_loss=0.6491916949932391, metrics={'train_runtime': 57.99, 'train_samples_per_second': 34.489, 'train_steps_per_second': 4.484, 'total_flos': 101132046434304.0, 'train_loss': 0.6491916949932391, 'epoch': 20.0})

As we can see, the rouge1 and rouge2 scores are fairly good; anything above 0.5 is considered good. Maybe this is because the T5 model has already been pre-trained on a fairly large philosophical dataset.

j) Push to hub

trainer.push_to_hub()

k) Summarise using pipeline

text = "summarize: A seeker who has the necessary qualifications, in order that he may be redeemed from his inner weaknesses, attachments, animalisms and false values is advised to serve with devotion a Teacher who is well- established in the experience of the Self."

from transformers import pipeline

summarizer = pipeline("summarization", model="tvganesh/philosophy_model")
summarizer(text)

[{'summary_text': 'A seeker who has the necessary qualifications will be able to free oneself of sense objects, and one cannot expect this to happen without any mental tossing'}]

l) Summarise using model generate

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tvganesh/philosophy_model")
inputs = tokenizer(text, return_tensors="pt").input_ids

from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("tvganesh/philosophy_model")
outputs = model.generate(inputs, max_new_tokens=70, do_sample=False)

tokenizer.decode(outputs[0], skip_special_tokens=True)

'A seeker who has the necessary qualifications will help in his journey to redeem himself'

m) Number of beams

summary_ids = model.generate(inputs,
                                    num_beams=10,
                                    no_repeat_ngram_size=3,
                                    min_length=20,
                                    max_length=70,
                                    early_stopping=True)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
output

'A seeker who has the necessary qualifications will be able to free himself of sense objects and false values'

I also tried Facebook’s BART Large model but the performance was not good at all.

You can try out the model at the following link philosophy_model

Anyway this was a good learning experience.

References

  1. Summarisation
  2. Fine-tune a pre-trained model
  3. Generative AI with Large Language Models , Coursera

Also see

  1. Deep Learning from first principles in Python, R and Octave – Part 4
  2. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  3. Computing IPL player similarity using Embeddings, Deep Learning
  4. Natural language processing: What would Shakespeare say?
  5. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  6. Revisiting World Bank data analysis with WDI and gVisMotionChart
  7. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  8. Sea shells on the seashore
  9. Experiments with deblurring using OpenCV
  10. A closer look at “Robot Horse on a Trot” in Android

To see all posts click Index of posts

Computing IPL player similarity using Embeddings, Deep Learning

In this post, I revisit the visualisation of IPL batsman and bowler similarities using Google’s Embedding Projector. I had previously done this using multivariate regression in my earlier post ‘Using embeddings, collaborative filtering with Deep Learning to analyse T20 players.’ However, I was not too satisfied with the result since I was not getting the required accuracy.

This post uses the win-loss status of IPL matches from 2014 up to 2023 in Logistic Regression with Deep Learning. A 16-dimensional embedding layer is added for the batsman and the bowler in the ball-by-ball data. Since I have used a reduced-size dataset (from 2014 onwards) I get a slightly reduced accuracy, but I still think this is a well-formulated problem.

A Deep Learning network performs gradient descent with Adam optimisation to arrive at an accuracy of 0.8047. The weights of 'layer 0' of the learnt Deep Learning network are used for displaying the batsman and bowler similarities.

Similarity measures – Cosine similarity

Cosine similarity is a value bound between 0 and 1 here. A value closer to 0 means that the two vectors are orthogonal, or perpendicular, to each other. A value closer to 1 means that the angle between them is smaller and the two players are similar.
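A minimal sketch of the computation on two hypothetical 16-dimensional embedding vectors:

import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = u.v / (|u| |v|)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

u = np.array([0.20, 0.70, 0.10] + [0.0] * 13)  # hypothetical player embedding
v = np.array([0.25, 0.65, 0.20] + [0.0] * 13)  # hypothetical similar player
print(cosine_similarity(u, v))                 # close to 1 => similar styles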

a) Data set

For the dataset, only IPL T20 matches from Jan 2014 up to the present (May 2023) were taken. A Deep Learning model using Logistic Regression with batsman and bowler embeddings is used to minimise the error. An accuracy of 0.8047 is obtained. In my earlier post 'GooglyPlusPlus: Win Probability using Deep Learning and player embeddings' I had used data from all T20 leagues (~1.2 million rows) and got an accuracy of 0.8647.

b) Import the data

import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from pathlib import Path
import matplotlib.pyplot as plt

df1=pd.read_csv('ipl2014_23.csv')
print("Shape of dataframe=",df1.shape)

train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
train_dataset1

a=train_dataset1.describe()
stats=a.transpose()

Shape of dataframe= (138896, 10)
batsmanIdx	bowlerIdx	ballNum	ballsRemaining	runs	runRate	numWickets	runsMomentum	perfIndex
count	111117.000000	111117.000000	111117.000000	111117.000000	111117.000000	111117.000000	111117.000000	111117.000000	111117.000000
mean	218.672939	169.204145	120.372067	60.749822	86.881701	1.636353	2.423167	0.296061	10.578927
std	118.405729	96.934754	69.991408	35.298794	51.643164	2.672564	2.085956	0.620872	4.436981
min	1.000000	1.000000	1.000000	1.000000	-5.000000	-5.000000	0.000000	0.057143	0.000000
25%	111.000000	89.000000	60.000000	30.000000	45.000000	1.160000	1.000000	0.106383	7.733333
50%	220.000000	170.000000	119.000000	60.000000	85.000000	1.375000	2.000000	0.142857	10.329545
75%	325.000000	249.000000	180.000000	91.000000	126.000000	1.640000	4.000000	0.240000	13.108696
max	411.000000	332.000000	262.000000	135.000000	258.000000	251.000000	10.000000	11.000000	66.000000

c) Create a Deep Learning model using batsman & bowler embeddings

import pandas as pd
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model

tf.random.set_seed(432)
# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

# Set the embedding size
no_of_unique_batman=len(df1["batsmanIdx"].unique())
print(no_of_unique_batman)
no_of_unique_bowler=len(df1["bowlerIdx"].unique())
print(no_of_unique_bowler)
embedding_size_bat = no_of_unique_batman ** (1/4)
embedding_size_bwl = no_of_unique_bowler ** (1/4)


# create embedding layer for the categorical predictor
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)

# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])

# add hidden layers
#x = Dense(64, activation='relu')(x)
#x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)
# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)
# create model
model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()

# compile model
optimizer=keras.optimizers.Adam(learning_rate=.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True)

model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# train the model
history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=40, batch_size=1024,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

d) Project embeddings with Google’s Embedding projector

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass

%load_ext tensorboard
import os
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorboard.plugins import projector

%pwd
# Set up a logs directory, so Tensorboard knows where to look for files.
log_dir='/logs/batsmen/'
if not os.path.exists(log_dir):
    os.makedirs(log_dir)

df3=pd.read_csv('batsmen.csv')
batsmen = df3["batsman"].unique().tolist()
batsmen
# Create dictionary of batsman to index
batsmen2index = {x: i for i, x in enumerate(batsmen)}
batsmen2index
# Create dictionary of index to batsman
index2batsmen = {i: x for i, x in enumerate(batsmen)}
index2batsmen


# Save Labels separately on a line-by-line manner.
with open(os.path.join(log_dir, 'metadata.tsv'), "w") as f:
    for batsmanIdx in range(1, 411):
        # Get the name of the batsman associated with the current index
        batsman = index2batsmen.get(batsmanIdx)
        f.write("{}\n".format(batsman))

# Save the weights we want to analyze as a variable. Note that the first
# value represents any unknown player, which is not in the metadata;
# here we will remove this value.
weights = tf.Variable(model.get_weights()[0][1:])
print(weights)
print(type(weights))
print(len(model.get_weights()[0]))
# Create a checkpoint from embedding, the filename and key are the
# name of the tensor.
checkpoint = tf.train.Checkpoint(embedding=weights)
checkpoint.save(os.path.join(log_dir, "embedding.ckpt"))

# Set up config.
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
# The name of the tensor will be suffixed by `/.ATTRIBUTES/VARIABLE_VALUE`.
embedding.tensor_name = "embedding/.ATTRIBUTES/VARIABLE_VALUE"
embedding.metadata_path = 'metadata.tsv'
projector.visualize_embeddings(log_dir, config)
# Now run tensorboard against on log data we just saved.

%reload_ext tensorboard
%tensorboard --logdir /logs/batsmen/

e) Here are similarity measures for some batsmen

I) Principal Component Analysis (PCA): In the charts and video animations below, the 16-dimensional embedding vectors of batsmen and bowlers are reduced to 3 principal components in a lower dimension for visualisation and analysis, as shown below
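Conceptually, the reduction the projector performs is similar to the sketch below, assuming weights is the (number of players, 16) embedding matrix extracted in step d):

from sklearn.decomposition import PCA

# project the 16-dimensional embeddings onto 3 principal components
pca = PCA(n_components=3)
embeddings_3d = pca.fit_transform(weights.numpy())
print(pca.explained_variance_ratio_)  # variance captured by each component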

a) Yashasvi Jaiswal (similar players)

i) PCA – Chart

Yashasvi Jaiswal's style of attack is similar to that of Faf du Plessis, Quinton de Kock, Bravo etc. In the chart below, the angle between Jaiswal and SP Narine is 0.109, and between Jaiswal and Faf du Plessis it is 0.253. These represent the angles in radians. The smaller the angle, the more similar the performance style of the players (cos 0 = 1, i.e. the players are most similar).

ii) PCA animation video for Yashasvi Jaiswal

b) Suryakumar Yadav (SKY)

i) PCA -Chart

The closest neighbours for SKY are RV Uthappa, Rahul Tripathi, Q de Kock, Samson and Rashid Khan.

ii) PCA – Animation video for Suryakumar Yadav

c) M S Dhoni

i) PCA – Chart

Dhoni rubs shoulders with Bravo, AB de Villiers, Shane Watson, Chris Gayle, Rayudu and Gautam Gambhir.

ii) PCA – Animation video for M S Dhoni

f) PCA Analysis for bowlers

a) Jasprit Bumrah

i) PCA – Chart

Bumrah's bowling performance is similar to that of Josh Hazlewood, Chameera, Kuldeep Yadav, Nortje, Adam Zampa etc.

ii) PCA Animation video for Jasprit Bumrah

b) Yuzhvendra Chahal

i) PCA – Chart

Chahal's performance has a strong similarity to Malinga, Zaheer Khan, Imran Tahir, R Shepherd and Adil Rashid.

ii) PCA Animation video for YS Chahal

g) Other similarity measures (t-SNE & UMAP)

There are 2 other similarity visualisations in Google’s Embedding Projector namely

i) t-SNE (t-distributed Stochastic Neighbor Embedding) – t-SNE tries to find a faithful representation of the data distribution in high dimensional space in a lower dimensional space. t-SNE differs from PCA by preserving only local similarities, whereas PCA preserves large pairwise distances.

a) t-SNE Animation video

ii) UMAP – Uniform Manifold Approximation and Projection

UMAP learns the manifold structure of the high dimensional data and finds a low dimensional embedding that preserves the essential topological structure of that manifold.
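Both reductions can be reproduced offline on the same embedding matrix, assuming the scikit-learn and umap-learn packages are installed:

from sklearn.manifold import TSNE
import umap

X = weights.numpy()  # (number of players, 16) embedding matrix
tsne_3d = TSNE(n_components=3, perplexity=30).fit_transform(X)
umap_3d = umap.UMAP(n_components=3, n_neighbors=15).fit_transform(X)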

ii) UMAP – Animation video

The Embedding projector thus helps in identifying players based on how they perform against bowlers, and probably picks up a lot of features like strike rate and performance in different stages of the game.

Hope you enjoyed the post!

Also see

  1. Exploring Quantum Gate operations with QCSimulator
  2. De-blurring revisited with Wiener filter using OpenCV
  3. Using Reinforcement Learning to solve Gridworld
  4. Deep Learning from first principles in Python, R and Octave – Part 4
  5. Big Data 6: The T20 Dance of Apache NiFi and yorkpy
  6. Latency, throughput implications for the Cloud
  7. Programming languages in layman’s language
  8. Practical Machine Learning with R and Python – Part 6
  9. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  10. A closer look at “Robot Horse on a Trot” in Android

To see all posts click Index of posts

Cricketr analyzes Ind-Aus faceoff in WTC 2023!!

“The unexamined life is not worth living.” – Socrates

“There is no easy way from the earth to the stars.” – Seneca

“If you want to go fast, go alone. If you want to go far, go together.” – African Proverb

1. Introduction

In this post, I use my R package cricketr to analyze the Indian and Australian World Test Championship (WTC) final squads ahead of the World Test Championship 2023. My R package cricketr had its birth on Jul 4, 2015. Cricketr uses data from Cricinfo.

You can download the latest PDF version of the book  at  ‘Cricket analytics with cricketr and cricpy: Analytics harmony with R and Python-6th edition

Indian squad

Rohit Sharma (Captain), Shubman Gill, Cheteshwar Pujara, Virat Kohli, Ajinkya Rahane, Ravindra Jadeja, Shardul Thakur, Mohd. Shami, Mohd. Siraj, Ishan Kishan (wk).

According to me, Ishan Kishan has more experience than KS Bharat, though Rishabh Pant would have been the ideal wicket keeper/left-handed batsman. I think Shardul Thakur would be a handful in English conditions. For a spinner, it is either Ashwin or Jadeja. Maybe the balance shifts in favor of Jadeja.

Australian squad

Pat Cummins (capt), Alex Carey (wk), Cameron Green, Josh Hazlewood, Usman Khawaja, Marnus Labuschagne, Nathan Lyon, Todd Murphy, Steven Smith (vice-capt), Mitchell Starc, David Warner.

Not sure if Scott Boland would fill in instead of Todd Murphy.

Let me give you a lay-of-the-land below. The post is organized into the following parts

  1. Analysis of Indian WTC batsmen from Jan 2016 – May 2023
  2. Analysis of Indian WTC batsmen against Australia from Jan 2016 -May 2023
  3. Analysis of Australian WTC batsmen from Jan 2016 – May 2023
  4. Analysis of Australian WTC batsmen against India from Jan 2016 -May 2023
  5. Analysis of Indian WTC bowlers from Jan 2016 – May 2023
  6. Analysis of Indian WTC bowlers against Australia from Jan 2016 -May 2023
  7. Analysis of Australian WTC bowlers from Jan 2016 – May 2023
  8. Analysis of Australian WTC bowlers against India from Jan 2016 -May 2023
  9. Team analysis of India and Australia

All the above analyses use data from ESPN Statsguru and my R package cricketr.

The data for the different players have been obtained using calls such as the ones below.

# Get Shubman Gill's batting data
#shubman <-getPlayerData(1070173,dir=".",file="shubman.csv",type="batting",homeOrAway=c(1,2), result=c(1,2,4))
#shubmansp <- getPlayerDataSp(1070173,tdir=".",tfile="shubmansp.csv",ttype="batting")

#Get Shubman Gill's data from Jan 2016 - May 2023
#df <-getPlayerDataHA(1070173,tfile="shubman1.csv",type="batting", matchType="Test")
#df1=getPlayerDataOppnHA(infile="shubman1.csv",outfile="shubmanTestAus.csv",startDate="2016-01-01",endDate="2023-05-01")

#Get Shubman Gills data from Jan 2016 - May 2023, against Australia
#df <-getPlayerDataHA(1070173,tfile="shubman1.csv",type="batting", matchType="Test")
#df1=getPlayerDataOppnHA(infile="shubman1.csv",outfile="shubmanTestAus.csv",opposition="Australia",startDate="2016-01-01",endDate="2023-05-01")

Note: To get data for bowlers, we need to use the corresponding profile number and type='bowling'. Details are in my posts below.

To do similar analysis please go through the following posts

  1. Re-introducing cricketr! : An R package to analyze performances of cricketers
  2. Cricketr learns new tricks : Performs fine-grained analysis of players
  3. Cricketr adds team analytics to its repertoire!!!

Note 1: I will not be analysing each and every chart as the charts are quite self-explanatory 

Note 2: I have had to tile charts together otherwise this would become a very, very long post. You are free to use my R package cricketr and check it out for yourself.

3. Analysis of India WTC batsmen from Jan 2016 – May 2023

Findings

  1. Kohli has the best average of 48+. India has won when Rohit and Rahane played well
  2. Kohli tops the list in cumulative average runs, followed by Pujara, and Rohit is 3rd. Gill is on the upswing.
  3. Against Australia Pujara has the best cumulative average runs record followed by Rahane, with Gill in hot pursuit. In the strike rate department Gill tops followed by Rohit and Rahane
  4. Since 2016 Smith, Labuschagne has an average of 53+ since 2016!! Warner & Khwaja are at ~46
  5. Australia has won matches when Smith, Warner and Khwaja have played well.
  6. Labuschagne, Smith and C Green have good records against India. Indian bowlers will need to contain them
  7. Ashwin has the highest wickets followed by Jadeja against all teams. Ashwin’s performance has dropped over the years, while Siraj has been becoming better
  8. Jadeja has the best economy rate followed by Ashwin
  9. Against Australia specifically Jadeja has the best record followed by Ashwin. Jadeja has the best economy against Australia, followed by Siraj, then Ashwin
  10. Cummins, Starc and Lyons are the best performers for Australia. Hazzlewood, Cummins have the best economy against all opposition
  11. Against India Lyon, Cummins and Hazzlewood have performed well
  12. Hazzlewood, Lyon have a good economy rate against India
  13. Against Australia India has won 17 times, lost 60 and drawn 22 in Australia. At home India won 42, tied 2, lost 28 and drawn 24
  14. At the Oval where the World Test Championship is going to be held India has won 4, lost 10 and drawn 10.

Note 3: You can also read this post on Rpubs at ind-aus-WTC!! The formatting will be nicer!

Note 4: You can download this post as a PDF to read at your leisure: ind-aus-WTC.pdf

2. Install the cricketr package

if (!require("cricketr")){
    install.packages("cricketr",lib = "c:/test")
}
library(cricketr)

3. Analysis of India WTC batsmen from Jan 2016 – May 2023

3a. Basic analysis

The analyses below include
  • Runs frequency plot
  • Mean strike rate
  • Run ranges

Kohli’s strike rate increases with increasing runs, while Gill’s seems to drop, as do Pujara’s and Rahane’s.

par(mfrow=c(3,3))
par(mar=c(4,4,2,2))
batsmanRunsFreqPerf("kohliTest.csv","Kohli")
batsmanMeanStrikeRate("kohliTest.csv","Kohli")
batsmanRunsRanges("kohliTest.csv","Kohli")

batsmanRunsFreqPerf("rohitTest.csv","Rohit")
batsmanMeanStrikeRate("rohitTest.csv","Rohit")
batsmanRunsRanges("rohitTest.csv","Rohit")

batsmanRunsFreqPerf("shubmanTest.csv","S Gill")
batsmanMeanStrikeRate("shubmanTest.csv","S Gill")
batsmanRunsRanges("shubmanTest.csv","S Gill")
par(mfrow=c(2,3))
par(mar=c(4,4,2,2))
batsmanRunsFreqPerf("rahaneTest.csv","Rahane")
batsmanMeanStrikeRate("rahaneTest.csv","Rahane")
batsmanRunsRanges("rahaneTest.csv","Rahane")

batsmanRunsFreqPerf("pujaraTest.csv","Pujara")
batsmanMeanStrikeRate("pujaraTest.csv","Pujara")
batsmanRunsRanges("pujaraTest.csv","Pujara")

3b. More analyses

Kohli hits roughly five 4s on his way to a 50, versus Gill and Pujara who are able to smash six 4s.

par(mfrow=c(3,3))
par(mar=c(4,4,2,2))

batsman4s("kohliTest.csv","Kohli")
batsman6s("kohliTest.csv","Kohli")
batsmanMeanStrikeRate("kohliTest.csv","Kohli")

batsman4s("rohitTest.csv","Rohit")
batsman6s("rohitTest.csv","Rohit")
batsmanMeanStrikeRate("rohitTest.csv","Rohit")

batsman4s("shubmanTest.csv","S Gill")
batsman6s("shubmanTest.csv","S Gill")
batsmanMeanStrikeRate("shubmanTest.csv","S Gill")
par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

batsman4s("rahaneTest.csv","Rahane")
batsman6s("rahaneTest.csv","Rahane")
batsmanMeanStrikeRate("rahane.csv","Rahane")

batsman4s("pujaraTest.csv","Pujara")
batsman6s("pujaraTest.csv","Pujara")
batsmanMeanStrikeRate("pujaraTest.csv","Pujara")

3c. Boxplot histogram plot

This plot shows a combined boxplot of the runs ranges and a histogram of the runs frequency. Kohli’s average is 48, while Rohit and Pujara are at ~40, with Rahane and Gill around 33.

batsmanPerfBoxHist("kohliTest.csv","Kohli")
batsmanPerfBoxHist("rohitTest.csv","Rohit")
batsmanPerfBoxHist("shubmanTest.csv","S Gill")
batsmanPerfBoxHist("rahaneTest.csv","Rahane")
batsmanPerfBoxHist("pujaraTest.csv","Pujara")

3d. Contribution to won and lost matches

For the functions below you will have to use the getPlayerDataSp() function. When Rohit Sharma and Pujara have played well, India has tended to win more often.
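The ‘sp’ files used below are generated with getPlayerDataSp(); a sketch of the call is below, where <profile-no> is a placeholder for the player's actual Cricinfo profile number:

#kohlisp <- getPlayerDataSp(<profile-no>,tdir=".",tfile="kohlisp.csv",ttype="batting")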

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanContributionWonLost("kohlisp.csv","Kohli")
batsmanContributionWonLost("rohitsp.csv","Rohit")
batsmanContributionWonLost("rahanesp.csv","Rahane")
batsmanContributionWonLost("pujarasp.csv","Pujara")

3e. Performance at home and overseas

This function also requires the use of getPlayerDataSp() as shown above. This can only be used for Test matches

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanPerfHomeAway("kohlisp.csv","Kohli")
batsmanPerfHomeAway("rohitsp.csv","Rohit")
batsmanPerfHomeAway("rahanesp.csv","Rahane")
batsmanPerfHomeAway("pujarasp.csv","Pujara")

3f. Batsman average at different venues

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanAvgRunsGround("kohliTest.csv","Kohli")
batsmanAvgRunsGround("rohitTest.csv","Rohit")
batsmanAvgRunsGround("rahaneTest.csv","Rahane")
batsmanAvgRunsGround("pujaraTest.csv","Pujara")

3g. Batsman average against different opposition

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanAvgRunsOpposition("kohliTest.csv","Kohli")
batsmanAvgRunsOpposition("rohitTest.csv","Rohit")
batsmanAvgRunsOpposition("rahaneTest.csv","Rahane")
batsmanAvgRunsOpposition("pujaraTest.csv","Pujara")

3h. Runs Likelihood of batsman

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanRunsLikelihood("kohli.csv","Kohli")
## Summary of  Kohli 's runs scoring likelihood
## **************************************************
## 
## There is a 52.91 % likelihood that Kohli  will make  12 Runs in  26 balls over 35  Minutes 
## There is a 30.81 % likelihood that Kohli  will make  52 Runs in  100 balls over  139  Minutes 
## There is a 16.28 % likelihood that Kohli  will make  142 Runs in  237 balls over 335  Minutes
batsmanRunsLikelihood("rohit.csv","Rohit")
## Summary of  Rohit 's runs scoring likelihood
## **************************************************
## 
## There is a 43.24 % likelihood that Rohit  will make  10 Runs in  21 balls over 32  Minutes 
## There is a 45.95 % likelihood that Rohit  will make  46 Runs in  85 balls over  124  Minutes 
## There is a 10.81 % likelihood that Rohit  will make  110 Runs in  199 balls over 282  Minutes
batsmanRunsLikelihood("rahane.csv","Rahane")
## Summary of  Rahane 's runs scoring likelihood
## **************************************************
## 
## There is a 7.75 % likelihood that Rahane  will make  124 Runs in  224 balls over 318  Minutes 
## There is a 62.02 % likelihood that Rahane  will make  12 Runs in  26 balls over  37  Minutes 
## There is a 30.23 % likelihood that Rahane  will make  55 Runs in  113 balls over 162  Minutes
batsmanRunsLikelihood("pujara.csv","Pujara")
## Summary of  Pujara 's runs scoring likelihood
## **************************************************
## 
## There is a 60.49 % likelihood that Pujara  will make  15 Runs in  38 balls over 55  Minutes 
## There is a 31.48 % likelihood that Pujara  will make  62 Runs in  142 balls over  204  Minutes 
## There is a 8.02 % likelihood that Pujara  will make  153 Runs in  319 balls over 445  Minutes

3h1. Moving average of batsman

Kohli’s moving average in Tests seems to have dropped after a peak in 2017–2018. So it is with Rahane.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanMovingAverage("kohli.csv","Kohli")
batsmanMovingAverage("rohit.csv","Rohit")
batsmanMovingAverage("rahane.csv","Rahane")
batsmanMovingAverage("pujara.csv","Pujara")

3i. Cumulative Average runs of batsman in career

Kohli’s cumulative average works out to ~48. Shubman Gill’s cumulative average is on the rise.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanCumulativeAverageRuns("kohliTest.csv","Kohli")
batsmanCumulativeAverageRuns("rohitTest.csv","Rohit")
batsmanCumulativeAverageRuns("rahaneTest.csv","Rahane")
batsmanCumulativeAverageRuns("pujaraTest.csv","Pujara")
batsmanCumulativeAverageRuns("shubmanTest.csv","S Gill")

3j Cumulative Average strike rate of batsman in career

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanCumulativeStrikeRate("kohliTest.csv","Kohli")
batsmanCumulativeStrikeRate("rohitTest.csv","Rohit")
batsmanCumulativeStrikeRate("rahaneTest.csv","Rahane")
batsmanCumulativeStrikeRate("pujaraTest.csv","Pujara")
batsmanCumulativeStrikeRate("shubmanTest.csv","S Gill")

3k. Future Runs forecast

Here are plots that forecast how the batsmen will perform in future. In this case 90% of the career runs trend is used as the training set; the remaining 10% is the test set.

A Holt-Winters forecasting model is used to forecast future performance based on the 90% training set. The forecasted runs trend is plotted, and the test set is also plotted to see how closely the forecast matches the actual values.

Take a look at the runs forecast for the batsmen below.
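For intuition, here is a minimal, self-contained sketch of the 90/10 Holt-Winters idea in base R, with toy data. This is not cricketr’s internal code, just the general technique:

# Toy career scores; in practice these come from the player's CSV
runs <- c(12, 45, 8, 102, 33, 67, 21, 88, 5, 54, 76, 19, 40, 63, 7, 95, 28, 51, 3, 70)
n <- floor(0.9 * length(runs))                 # first 90% as the training set
train <- ts(runs[1:n])
hw <- HoltWinters(train, gamma = FALSE)        # non-seasonal Holt-Winters fit
fc <- predict(hw, n.ahead = length(runs) - n)  # forecast the held-out 10%
plot(hw, fc)                                   # fitted trend with forecasted values

The actual plots below are generated with cricketr’s batsmanPerfForecast().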

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanPerfForecast("kohli.csv","Kohli")
batsmanPerfForecast("rohit.csv","Rohit")
batsmanPerfForecast("rahane.csv","Rahane")
batsmanPerfForecast("pujara.csv","Pujara")

3l. Relative Mean Strike Rate plot

The plot below compares the mean strike rate of the batsmen across runs ranges in buckets of 10.

frames <- list("kohliTest.csv","rohitTest.csv","pujaraTest.csv","rahaneTest.csv","shubmanTest.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeBatsmanSR(frames,names)

3m. Relative Runs Frequency plot

The plot below gives the relative runs frequency percentages for each 10-run bucket.

frames <- list("kohliTest.csv","rohitTest.csv","pujaraTest.csv","rahaneTest.csv","shubmanTest.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeRunsFreqPerf(frames,names)

3n. Relative cumulative average runs in career

Kohli tops the list, followed by Pujara, with Rohit 3rd. Gill is on the upswing. Hope he performs well.

frames <- list("kohliTest.csv","rohitTest.csv","pujaraTest.csv","rahaneTest.csv","shubmanTest.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeBatsmanCumulativeAvgRuns(frames,names)

3o. Relative cumulative average strike rate in career

Rohit has the best strike rate, followed by Kohli, with Shubman Gill catching up fast.

frames <- list("kohliTest.csv","rohitTest.csv","pujaraTest.csv","rahaneTest.csv","shubmanTest.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeBatsmanCumulativeStrikeRate(frames,names)

3p. Check Batsman In-Form or Out-of-Form

The computation below uses Null Hypothesis testing and the p-value to determine whether the batsman is in-form or out-of-form. For this, 90% of the career runs is chosen as the population and its mean computed. The last 10% is chosen as the sample set, and the sample mean and sample standard deviation are calculated.

The Null Hypothesis (H0) assumes that the batsman continues to stay in-form, i.e. the sample mean is within the 95% confidence interval of the population mean. The Alternative Hypothesis (Ha) assumes that the batsman is out of form, i.e. the sample mean is beyond the 95% confidence interval of the population mean.

A significance level of 0.05 is chosen and the p-value is computed. If p-value >= 0.05 – Batsman In-Form. If p-value < 0.05 – Batsman Out-of-Form.

Note: Ideally the test should be done on a population that follows the Normal Distribution, but the runs population is usually left-skewed, so some correction may be needed. I will revisit this later.

This is done for the top batsmen.
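As a rough sketch of the computation (not cricketr’s exact implementation), the check amounts to a one-sided t-test of the last 10% of scores against the mean of the first 90%:

# Toy career scores; population = first 90%, sample = last 10%
runs <- c(12, 45, 8, 102, 33, 67, 21, 88, 5, 54, 76, 19, 40, 63, 7, 95, 28, 51, 3, 70)
n <- floor(0.9 * length(runs))
popMean <- mean(runs[1:n])                 # population mean
sampleRuns <- runs[(n + 1):length(runs)]   # sample set
# Ha: sample mean is below the population mean (alpha = 0.05)
tst <- t.test(sampleRuns, mu = popMean, alternative = "less")
ifelse(tst$p.value >= 0.05, "In-Form", "Out-of-Form")

cricketr’s checkBatsmanInForm() below reports this decision along with the population and sample statistics.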

checkBatsmanInForm("kohli.csv","Kohli")
## [1] "**************************** Form status of Kohli ****************************\n\n Population size: 154  Mean of population: 47.03 \n Sample size: 18  Mean of sample: 32.22 SD of sample: 42.45 \n\n Null hypothesis H0 : Kohli 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Kohli 's sample average is below the 95% confidence interval of population average\n\n Kohli 's Form Status: In-Form because the p value: 0.078058  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("rohit.csv","Rohit")
## [1] "**************************** Form status of Rohit ****************************\n\n Population size: 66  Mean of population: 37.03 \n Sample size: 8  Mean of sample: 37.88 SD of sample: 35.38 \n\n Null hypothesis H0 : Rohit 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Rohit 's sample average is below the 95% confidence interval of population average\n\n Rohit 's Form Status: In-Form because the p value: 0.526254  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("rahane.csv","Rahane")
## [1] "**************************** Form status of Rahane ****************************\n\n Population size: 116  Mean of population: 34.78 \n Sample size: 13  Mean of sample: 21.38 SD of sample: 21.96 \n\n Null hypothesis H0 : Rahane 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Rahane 's sample average is below the 95% confidence interval of population average\n\n Rahane 's Form Status: Out-of-Form because the p value: 0.023244  is less than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("pujara.csv","Pujara")
## [1] "**************************** Form status of Pujara ****************************\n\n Population size: 145  Mean of population: 41.93 \n Sample size: 17  Mean of sample: 33.24 SD of sample: 31.74 \n\n Null hypothesis H0 : Pujara 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Pujara 's sample average is below the 95% confidence interval of population average\n\n Pujara 's Form Status: In-Form because the p value: 0.137319  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("shubman.csv","S Gill")
## [1] "**************************** Form status of S Gill ****************************\n\n Population size: 23  Mean of population: 30.43 \n Sample size: 3  Mean of sample: 51.33 SD of sample: 66.88 \n\n Null hypothesis H0 : S Gill 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : S Gill 's sample average is below the 95% confidence interval of population average\n\n S Gill 's Form Status: In-Form because the p value: 0.687033  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"

3q. Predicting Runs given Balls Faced and Minutes at Crease

A multivariate regression plane is fitted between Runs and Balls Faced + Minutes at crease.
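Under the hood this is an ordinary least-squares fit of Runs on balls faced and minutes at crease; a minimal sketch with toy data is below (the column names BF, Mins and Runs are assumptions for illustration, not necessarily cricketr’s internals):

# Toy data frame standing in for a player's innings data
df <- data.frame(BF   = c(10, 40, 80, 120, 160),
                 Mins = c(15, 60, 110, 170, 230),
                 Runs = c(5, 22, 48, 71, 98))
fit <- lm(Runs ~ BF + Mins, data = df)     # fit the regression plane
predict(fit, newdata = data.frame(BF = 100, Mins = 140))

The actual predictions below use cricketr’s batsmanRunsPredict() over a grid of balls faced and minutes at crease.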

BF <- seq( 10, 400,length=15)
Mins <- seq(30,600,length=15)
newDF <- data.frame(BF,Mins)
kohli1 <- batsmanRunsPredict("kohli.csv","Kohli",newdataframe=newDF)
rohit1 <- batsmanRunsPredict("rohit.csv","Rohit",newdataframe=newDF)
pujara1 <- batsmanRunsPredict("pujara.csv","Pujara",newdataframe=newDF)
rahane1 <- batsmanRunsPredict("rahane.csv","Rahane",newdataframe=newDF)
sgill1 <- batsmanRunsPredict("shubman.csv","S Gill",newdataframe=newDF)
batsmen <-cbind(round(kohli1$Runs),round(rohit1$Runs),round(pujara1$Runs),round(rahane1$Runs),round(sgill1$Runs))
colnames(batsmen) <- c("Kohli","Rohit","Pujara","Rahane","S Gill")
newDF <- data.frame(round(newDF$BF),round(newDF$Mins))
colnames(newDF) <- c("BallsFaced","MinsAtCrease")
predictedRuns <- cbind(newDF,batsmen)
predictedRuns
##    BallsFaced MinsAtCrease Kohli Rohit Pujara Rahane S Gill
## 1          10           30     6     3      3      2      7
## 2          38           71    24    19     16     17     24
## 3          66          111    41    35     29     31     40
## 4          94          152    58    51     42     45     56
## 5         121          193    76    66     55     59     73
## 6         149          234    93    82     68     74     89
## 7         177          274   110    98     80     88    106
## 8         205          315   128   114     93    102    122
## 9         233          356   145   129    106    116    139
## 10        261          396   163   145    119    130    155
## 11        289          437   180   161    132    145    171
## 12        316          478   197   177    144    159    188
## 13        344          519   215   192    157    173    204
## 14        372          559   232   208    170    187    221
## 15        400          600   249   224    183    202    237

4. Analysis of India WTC batsmen from Jan 2016 – May 2023 against Australia

4a. Relative cumulative average

Against Australia specifically, between 2016 and 2023, Pujara has the best record followed by Rahane, with Gill in hot pursuit. Kohli and Rohit trail behind.

frames <- list("kohliTestAus.csv","rohitTestAus.csv","pujaraTestAus.csv","rahaneTestAus.csv","shubmanTestAus.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeBatsmanCumulativeAvgRuns(frames,names)

4b. Relative cumulative average strike rate in career

In the strike rate department Gill tops, followed by Rohit and Rahane.

frames <- list("kohliTestAus.csv","rohitTestAus.csv","pujaraTestAus.csv","rahaneTestAus.csv","shubmanTestAus.csv")
names <- list("Kohli","Rohit","Pujara","Rahane","S Gill")
relativeBatsmanCumulativeStrikeRate(frames,names)

5. Analysis of Australia WTC batsmen from Jan 2016 – May 2023

5a Basic analyses

par(mfrow=c(3,3))
par(mar=c(4,4,2,2))
batsmanRunsFreqPerf("stevesmithTest.csv","S Smith")
batsmanMeanStrikeRate("stevesmithTest.csv","S Smith")
batsmanRunsRanges("stevesmithTest.csv","S Smith")

batsmanRunsFreqPerf("warnerTest.csv","Warner")
batsmanMeanStrikeRate("warnerTest.csv","Warner")
batsmanRunsRanges("warnerTest.csv","Warner")

batsmanRunsFreqPerf("labuschagneTest.csv","M Labuschagne")
batsmanMeanStrikeRate("labuschagneTest.csv","M Labuschagne")
batsmanRunsRanges("labuschagneTest.csv","M Labuschagne")
par(mfrow=c(2,3))
par(mar=c(4,4,2,2))
batsmanRunsFreqPerf("cgreenTest.csv","C Green")
batsmanMeanStrikeRate("cgreenTest.csv","C Green")
batsmanRunsRanges("cgreenTest.csv","C Green")

batsmanRunsFreqPerf("khwajaTest.csv","Khwaja")
batsmanMeanStrikeRate("khwajaTest.csv","Khwaja")
batsmanRunsRanges("khwajaTest.csv","Khwaja")

5b. More analyses

par(mfrow=c(3,3))
par(mar=c(4,4,2,2))
batsman4s("stevesmithTest.csv","S Smith")
batsman6s("stevesmithTest.csv","S Smith")
batsmanMeanStrikeRate("stevesmithTest.csv","S Smith")

batsman4s("warnerTest.csv","Warner")
batsman6s("warnerTest.csv","Warner")
batsmanMeanStrikeRate("warnerTest.csv","Warner")

batsman4s("labuschagneTest.csv","M Labuschagne")
batsman6s("labuschagneTest.csv","M Labuschagne")
batsmanMeanStrikeRate("labuschagneTest.csv","M Labuschagne")
par(mfrow=c(2,3))
par(mar=c(4,4,2,2))
batsman4s("cgreenTest.csv","C Green")
batsman6s("cgreenTest.csv","C Green")
batsmanMeanStrikeRate("cgreenTest.csv","C Green")

batsman4s("khwajaTest.csv","Khwaja")
batsman6s("khwajaTest.csv","Khwaja")
batsmanMeanStrikeRate("khwajaTest.csv","Khwaja")

5c. Boxplot histogram plot

This plot shows a combined boxplot of the runs ranges and a histogram of the runs frequency.

Smith and Labuschagne have averages of 53+ since 2016!! Warner & Khwaja are at ~46.

batsmanPerfBoxHist("stevesmithTest.csv","S Smith")
batsmanPerfBoxHist("warnerTest.csv","Warner")
batsmanPerfBoxHist("labuschagneTest.csv","M Labuschagne")
batsmanPerfBoxHist("cgreenTest.csv","C Green")
batsmanPerfBoxHist("khwajaTest.csv","Khwaja")

5d. Contribution to won and lost matches

For the 2 functions below you will have to use the getPlayerDataSp() function. Australia has won matches when Smith, Warner and Khwaja have played well.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))
batsmanContributionWonLost("stevesmithsp.csv","S Smith")
batsmanContributionWonLost("warnersp.csv","Warner")
batsmanContributionWonLost("labuschagnesp.csv","M Labuschagne")
batsmanContributionWonLost("cgreensp.csv","C Green")
batsmanContributionWonLost("khwajasp.csv","Khwaja")

5e. Performance at home and overseas

This function also requires the use of getPlayerDataSp() as shown above. This can only be used for Test matches

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanPerfHomeAway("stevesmithsp.csv","S Smith")
batsmanPerfHomeAway("warnersp.csv","Warner")
batsmanPerfHomeAway("labuschagnesp.csv","M Labuschagne")
batsmanPerfHomeAway("cgreensp.csv","C Green")
batsmanPerfHomeAway("khwajasp.csv","Khwaja")

5f. Batsman average at different venues

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanAvgRunsGround("stevesmithTest.csv","S Smith")
batsmanAvgRunsGround("warnerTest.csv","Warner")
batsmanAvgRunsGround("labuschagneTest.csv","M Labuschagne")
batsmanAvgRunsGround("cgreenTest.csv","C Green")
batsmanAvgRunsGround("khwajaTest.csv","Khwaja")

5g. Batsman average against different opposition

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanAvgRunsOpposition("stevesmithTest.csv","S Smith")
batsmanAvgRunsOpposition("warnerTest.csv","Warner")
batsmanAvgRunsOpposition("labuschagneTest.csv","M Labuschagne")
batsmanAvgRunsOpposition("khwajaTest.csv","Khwaja")

5h. Runs Likelihood of batsman

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanRunsLikelihood("stevesmithTest.csv","S Smith")
## Summary of  S Smith 's runs scoring likelihood
## **************************************************
## 
## There is a 58.76 % likelihood that S Smith  will make  21 Runs in  38 balls over 56  Minutes 
## There is a 24.74 % likelihood that S Smith  will make  70 Runs in  148 balls over  210  Minutes 
## There is a 16.49 % likelihood that S Smith  will make  148 Runs in  268 balls over 398  Minutes
batsmanRunsLikelihood("warnerTest.csv","Warner")
## Summary of  Warner 's runs scoring likelihood
## **************************************************
## 
## There is a 7.22 % likelihood that Warner  will make  155 Runs in  253 balls over 372  Minutes 
## There is a 62.89 % likelihood that Warner  will make  14 Runs in  21 balls over  32  Minutes 
## There is a 29.9 % likelihood that Warner  will make  65 Runs in  94 balls over 135  Minutes
batsmanRunsLikelihood("labuschagneTest.csv","M Labuschagne")
## Summary of  M Labuschagne 's runs scoring likelihood
## **************************************************
## 
## There is a 32.76 % likelihood that M Labuschagne  will make  74 Runs in  144 balls over 206  Minutes 
## There is a 55.17 % likelihood that M Labuschagne  will make  22 Runs in  37 balls over  54  Minutes 
## There is a 12.07 % likelihood that M Labuschagne  will make  168 Runs in  297 balls over 420  Minutes
batsmanRunsLikelihood("khwajaTest.csv","Khwaja")
## Summary of  Khwaja 's runs scoring likelihood
## **************************************************
## 
## There is a 64.94 % likelihood that Khwaja  will make  14 Runs in  29 balls over 42  Minutes 
## There is a 27.27 % likelihood that Khwaja  will make  79 Runs in  148 balls over  210  Minutes 
## There is a 7.79 % likelihood that Khwaja  will make  165 Runs in  351 balls over 515  Minutes

5i. Moving average of batsman

Smith’s and Warner’s moving averages have been on a downward trend lately. Khwaja is playing well.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanMovingAverage("stevesmith.csv","S Smith")
batsmanMovingAverage("warner.csv","Warner")
batsmanMovingAverage("labuschagne.csv","M Labuschagne")
batsmanMovingAverage("khwaja.csv","Khwaja")

5j. Cumulative Average runs of batsman in career

Labuschagne, Smith and Warner have very good cumulative averages.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanCumulativeAverageRuns("stevesmithTest.csv","S Smith")
batsmanCumulativeAverageRuns("warnerTest.csv","Warner")
batsmanCumulativeAverageRuns("labuschagneTest.csv","M Labuschagne")
batsmanCumulativeAverageRuns("khwajaTest.csv","Khwaja")

5k. Cumulative Average strike rate of batsman in career

Warner towers over the others in the cumulative strike rate, followed by Labuschagne and Smith

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanCumulativeStrikeRate("stevesmithTest.csv","S Smith")
batsmanCumulativeStrikeRate("warnerTest.csv","Warner")
batsmanCumulativeStrikeRate("labuschagneTest.csv","M Labuschagne")
batsmanCumulativeStrikeRate("khwajaTest.csv","Khwaja")

5l. Future Runs forecast

Here are plots that forecast how the batsmen will perform in future. In this case 90% of the career runs trend is used as the training set; the remaining 10% is the test set.

A Holt-Winters forecasting model is used to forecast future performance based on the 90% training set. The forecasted runs trend is plotted, and the test set is also plotted to see how closely the forecast matches the actual values (see the Holt-Winters sketch in section 3k).

Take a look at the runs forecast for the batsmen below.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

batsmanPerfForecast("stevesmithTest.csv","S Smith")
batsmanPerfForecast("warnerTest.csv","Warner")
batsmanPerfForecast("labuschagneTest.csv","M Labuschagne")
batsmanPerfForecast("khwajaTest.csv","Khwaja")

5m. Relative Mean Strike Rate plot

The plot below compares the mean strike rate of the batsmen across runs ranges in buckets of 10.

frames <- list("stevesmithTest.csv","warnerTest.csv","khwajaTest.csv","labuschagneTest.csv","cgreenTest.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeBatsmanSR(frames,names)

5n. Relative Runs Frequency plot

The plot below gives the relative runs frequency percentages for each 10-run bucket.

frames <- list("stevesmithTest.csv","warnerTest.csv","khwajaTest.csv","labuschagneTest.csv","cgreenTest.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeRunsFreqPerf(frames,names)

5o. Relative cumulative average runs in career

frames <- list("stevesmithTest.csv","warnerTest.csv","khwajaTest.csv","labuschagneTest.csv","cgreenTest.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeBatsmanCumulativeAvgRuns(frames,names)

5p. Relative cumulative average strike rate in career

frames <- list("stevesmithTest.csv","warnerTest.csv","khwajaTest.csv","labuschagneTest.csv","cgreenTest.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeBatsmanCumulativeStrikeRate(frames,names)

5q. Check Batsman In-Form or Out-of-Form

The computation below uses Null Hypothesis testing and the p-value to determine whether the batsman is in-form or out-of-form. For this, 90% of the career runs is chosen as the population and its mean computed. The last 10% is chosen as the sample set, and the sample mean and sample standard deviation are calculated.

The Null Hypothesis (H0) assumes that the batsman continues to stay in-form, i.e. the sample mean is within the 95% confidence interval of the population mean. The Alternative Hypothesis (Ha) assumes that the batsman is out of form, i.e. the sample mean is beyond the 95% confidence interval of the population mean.

A significance level of 0.05 is chosen and the p-value is computed. If p-value >= 0.05 – Batsman In-Form. If p-value < 0.05 – Batsman Out-of-Form.

Note: Ideally the test should be done on a population that follows the Normal Distribution, but the runs population is usually left-skewed, so some correction may be needed. I will revisit this later.

This is done for the top 4 batsmen.

checkBatsmanInForm("stevesmith.csv","S Smith")
## [1] "**************************** Form status of S Smith ****************************\n\n Population size: 144  Mean of population: 53.76 \n Sample size: 17  Mean of sample: 45.65 SD of sample: 56.4 \n\n Null hypothesis H0 : S Smith 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : S Smith 's sample average is below the 95% confidence interval of population average\n\n S Smith 's Form Status: In-Form because the p value: 0.280533  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("warner.csv","Warner")
## [1] "**************************** Form status of Warner ****************************\n\n Population size: 164  Mean of population: 45.2 \n Sample size: 19  Mean of sample: 26.63 SD of sample: 44.62 \n\n Null hypothesis H0 : Warner 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Warner 's sample average is below the 95% confidence interval of population average\n\n Warner 's Form Status: Out-of-Form because the p value: 0.042744  is less than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("labuschagne.csv","M Labuschagne")
## [1] "**************************** Form status of M Labuschagne ****************************\n\n Population size: 52  Mean of population: 59.56 \n Sample size: 6  Mean of sample: 29.67 SD of sample: 19.96 \n\n Null hypothesis H0 : M Labuschagne 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : M Labuschagne 's sample average is below the 95% confidence interval of population average\n\n M Labuschagne 's Form Status: Out-of-Form because the p value: 0.005239  is less than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBatsmanInForm("khwaja.csv","Khwaja")
## [1] "**************************** Form status of Khwaja ****************************\n\n Population size: 89  Mean of population: 41.62 \n Sample size: 10  Mean of sample: 53.1 SD of sample: 76.34 \n\n Null hypothesis H0 : Khwaja 's sample average is within 95% confidence interval of population average\n Alternative hypothesis Ha : Khwaja 's sample average is below the 95% confidence interval of population average\n\n Khwaja 's Form Status: In-Form because the p value: 0.677691  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"

5r. Predicting Runs given Balls Faced and Minutes at Crease

A multivariate regression plane is fitted between Runs and Balls Faced + Minutes at crease, as sketched in section 3q.

BF <- seq( 10, 400,length=15)
Mins <- seq(30,600,length=15)
newDF <- data.frame(BF,Mins)
ssmith1 <- batsmanRunsPredict("stevesmith.csv","S Smith",newdataframe=newDF)
warner1 <- batsmanRunsPredict("warner.csv","Warner",newdataframe=newDF)
khwaja1 <- batsmanRunsPredict("khwaja.csv","Khwaja",newdataframe=newDF)
labuschagne1 <- batsmanRunsPredict("labuschagne.csv","Labuschagne",newdataframe=newDF)
cgreen1 <- batsmanRunsPredict("cgreen.csv","C Green",newdataframe=newDF)
batsmen <-cbind(round(ssmith1$Runs),round(warner1$Runs),round(khwaja1$Runs),round(labuschagne1$Runs),round(cgreen1$Runs))
colnames(batsmen) <- c("S Smith","Warner","Khwaja","Labuschagne","C Green")
newDF <- data.frame(round(newDF$BF),round(newDF$Mins))
colnames(newDF) <- c("BallsFaced","MinsAtCrease")
predictedRuns <- cbind(newDF,batsmen)
predictedRuns
##    BallsFaced MinsAtCrease S Smith Warner Khwaja Labuschagne C Green
## 1          10           30       7     10     10           9      13
## 2          38           71      23     30     24          24      29
## 3          66          111      38     50     38          40      44
## 4          94          152      53     70     53          55      60
## 5         121          193      69     90     67          70      75
## 6         149          234      84    110     81          85      91
## 7         177          274     100    130     95         100     106
## 8         205          315     115    150    109         116     122
## 9         233          356     130    170    123         131     137
## 10        261          396     146    190    137         146     153
## 11        289          437     161    210    151         161     168
## 12        316          478     177    230    165         176     184
## 13        344          519     192    250    179         192     199
## 14        372          559     207    270    193         207     215
## 15        400          600     223    290    207         222     230

6. Analysis of Australia WTC batsmen from Jan 2016 – May 2023 against India

6a. Relative cumulative average runs in career

Labuschagne, Smith and C Green have good records against India

frames <- list("stevesmithTestInd.csv","warnerTestInd.csv","khwajaTestInd.csv","labuschagneTestInd.csv","cgreenTestInd.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeBatsmanCumulativeAvgRuns(frames,names)

6b. Relative cumulative average strike rate in career

Warner, Labuschagne and Smith have a good strike rate against India

frames <- list("stevesmithTestInd.csv","warnerTestInd.csv","khwajaTestInd.csv","labuschagneTestInd.csv","cgreenTestInd.csv")
names <- list("S Smith","Warner","Khwaja","Labuschagne","C Green")
relativeBatsmanCumulativeStrikeRate(frames,names)

7. Analysis of India WTC bowlers from Jan 2016 – May 2023

7a Wickets frequency chart

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))
bowlerWktsFreqPercent("shamiTest.csv","Shami")
bowlerWktsFreqPercent("sirajTest.csv","Siraj")
bowlerWktsFreqPercent("ashwinTest.csv","Ashwin")
bowlerWktsFreqPercent("jadejaTest.csv","Jadeja")
bowlerWktsFreqPercent("shardulTest.csv","Shardul")

7b Wickets Runs chart

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerWktsRunsPlot("shamiTest.csv","Shami")
bowlerWktsRunsPlot("sirajTest.csv","Siraj")
bowlerWktsRunsPlot("ashwinTest.csv","Ashwin")
bowlerWktsRunsPlot("jadejaTest.csv","Jadeja")
bowlerWktsRunsPlot("shardulTest.csv","Shardul")

7c. Average wickets at different venues

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerAvgWktsGround("shamiTest.csv","Shami")
bowlerAvgWktsGround("sirajTest.csv","Siraj")
bowlerAvgWktsGround("ashwinTest.csv","Ashwin")
bowlerAvgWktsGround("jadejaTest.csv","Jadeja")
bowlerAvgWktsGround("shardulTest.csv","Shardul")

7d Average wickets against different opposition

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerAvgWktsOpposition("shamiTest.csv","Shami")
bowlerAvgWktsOpposition("sirajTest.csv","Siraj")
bowlerAvgWktsOpposition("ashwinTest.csv","Ashwin")
bowlerAvgWktsOpposition("jadejaTest.csv","Jadeja")
bowlerAvgWktsOpposition("shardulTest.csv","Shardul")

7e Cumulative average wickets taken

Ashwin’s performance has dropped over the years, while Siraj has been getting better.

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

bowlerCumulativeAvgWickets("shamiTest.csv","Shami")
bowlerCumulativeAvgWickets("sirajTest.csv","Siraj")
bowlerCumulativeAvgWickets("ashwinTest.csv","Ashwin")
bowlerCumulativeAvgWickets("jadejaTest.csv","Jadeja")
bowlerCumulativeAvgWickets("shardulTest.csv","Shardul")

7g Cumulative average economy rate

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerCumulativeAvgEconRate("shamiTest.csv","Shami")
bowlerCumulativeAvgEconRate("sirajTest.csv","Siraj")
bowlerCumulativeAvgEconRate("ashwinTest.csv","Ashwin")
bowlerCumulativeAvgEconRate("jadejaTest.csv","Jadeja")
bowlerCumulativeAvgEconRate("shardulTest.csv","Shardul")

7h Wicket forecast

Here are plots that forecast how the bowlers will perform in future. In this case 90% of the career wickets trend is used as the training set; the remaining 10% is the test set.

A Holt-Winters forecasting model is used to forecast future performance based on the 90% training set. The forecasted wickets trend is plotted, and the test set is also plotted to see how closely the forecast matches the actual values.

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerPerfForecast("shamiTest.csv","Shami")
#bowlerPerfForecast("sirajTest.csv","Siraj")
bowlerPerfForecast("ashwinTest.csv","Ashwin")
bowlerPerfForecast("jadejaTest.csv","Jadeja")
bowlerPerfForecast("shardulTest.csv","Shardul")

7i Relative Wickets Frequency Percentage

frames <- list("shamiTest.csv","sirajTest.csv","ashwinTest.csv","jadejaTest.csv","shardulTest.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlingPerf(frames,names)

7j Relative Economy Rate against wickets taken

frames <- list("shamiTest.csv","sirajTest.csv","ashwinTest.csv","jadejaTest.csv","shardulTest.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlingER(frames,names)

7k Relative cumulative average wickets of bowlers in career

Ashwin has taken the most wickets against all teams, followed by Jadeja.

frames <- list("shamiTest.csv","sirajTest.csv","ashwinTest.csv","jadejaTest.csv","shardulTest.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlerCumulativeAvgWickets(frames,names)

7l Relative cumulative average economy rate of bowlers

Jadeja has the best economy rate followed by Ashwin

frames <- list("shamiTest.csv","sirajTest.csv","ashwinTest.csv","jadejaTest.csv","shardulTest.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlerCumulativeAvgEconRate(frames,names)

7m Check for bowler in-form/out-of-form

The computation below uses Null Hypothesis testing and the p-value to determine whether the bowler is in-form or out-of-form. For this, 90% of the career wickets is chosen as the population and its mean computed. The last 10% is chosen as the sample set, and the sample mean and sample standard deviation are calculated.

The Null Hypothesis (H0) assumes that the bowler continues to stay in-form, i.e. the sample mean is within the 95% confidence interval of the population mean. The Alternative Hypothesis (Ha) assumes that the bowler is out of form, i.e. the sample mean is beyond the 95% confidence interval of the population mean.

A significance level of 0.05 is chosen and the p-value is computed. If p-value >= 0.05 – Bowler In-Form. If p-value < 0.05 – Bowler Out-of-Form.

Note: Ideally the test should be done on a population that follows the Normal Distribution, but the wickets population is usually left-skewed, so some correction may be needed. I will revisit this later.

Note: The check of the form status of the bowlers indicates the following

checkBowlerInForm("shami.csv","Shami")
## [1] "**************************** Form status of Shami ****************************\n\n Population size: 106  Mean of population: 1.93 \n Sample size: 12  Mean of sample: 1.33 SD of sample: 1.23 \n\n Null hypothesis H0 : Shami 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Shami 's sample average is below the 95% confidence\n        interval of population average\n\n Shami 's Form Status: In-Form because the p value: 0.058427  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("siraj.csv","Siraj")
## [1] "**************************** Form status of Siraj ****************************\n\n Population size: 29  Mean of population: 1.59 \n Sample size: 4  Mean of sample: 0.25 SD of sample: 0.5 \n\n Null hypothesis H0 : Siraj 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Siraj 's sample average is below the 95% confidence\n        interval of population average\n\n Siraj 's Form Status: Out-of-Form because the p value: 0.002923  is less than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("ashwin.csv","Ashwin")
## [1] "**************************** Form status of Ashwin ****************************\n\n Population size: 154  Mean of population: 2.77 \n Sample size: 18  Mean of sample: 2.44 SD of sample: 1.76 \n\n Null hypothesis H0 : Ashwin 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Ashwin 's sample average is below the 95% confidence\n        interval of population average\n\n Ashwin 's Form Status: In-Form because the p value: 0.218345  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("jadeja.csv","Jadeja")
## [1] "**************************** Form status of Jadeja ****************************\n\n Population size: 108  Mean of population: 2.22 \n Sample size: 12  Mean of sample: 1.92 SD of sample: 2.35 \n\n Null hypothesis H0 : Jadeja 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Jadeja 's sample average is below the 95% confidence\n        interval of population average\n\n Jadeja 's Form Status: In-Form because the p value: 0.333095  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("shardul.csv","Shardul")
## [1] "**************************** Form status of Shardul ****************************\n\n Population size: 13  Mean of population: 2 \n Sample size: 2  Mean of sample: 0.5 SD of sample: 0.71 \n\n Null hypothesis H0 : Shardul 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Shardul 's sample average is below the 95% confidence\n        interval of population average\n\n Shardul 's Form Status: Out-of-Form because the p value: 0.04807  is less than alpha=  0.05 \n *******************************************************************************************\n\n"

8. Analysis of India WTC bowlers from Jan 2016 – May 2023 against Australia

8a Relative cumulative average wickets of bowlers in career

Against Australia specifically Jadeja has the best record followed by Ashwin

frames <- list("shamiTestAus.csv","sirajTestAus.csv","ashwinTestAus.csv","jadejaTestAus.csv","shardulTestAus.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlerCumulativeAvgWickets(frames,names)

8b Relative cumulative average economy rate of bowlers

Jadeja has the best economy followed by Siraj, then Ashwin

frames <- list("shamiTestAus.csv","sirajTestAus.csv","ashwinTestAus.csv","jadejaTestAus.csv","shardulTestAus.csv")
names <- list("Shami","Siraj","Ashwin","Jadeja","Shardul")
relativeBowlerCumulativeAvgEconRate(frames,names)

9. Analysis of Australia WTC bowlers from Jan 2016 – May 2023

9a. Wickets frequency chart

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))
bowlerWktsFreqPercent("cumminsTest.csv","Cummins")
bowlerWktsFreqPercent("starcTest.csv","Starc")
bowlerWktsFreqPercent("hazzlewoodTest.csv","Hazzlewood")
bowlerWktsFreqPercent("todd.csv","Todd")
bowlerWktsFreqPercent("lyonTest.csv","N Lyon")

9b. Wickets Runs chart

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerWktsRunsPlot("cumminsTest.csv","Cummins")
bowlerWktsRunsPlot("starcTest.csv","Starc")
bowlerWktsRunsPlot("hazzlewoodTest.csv","Hazzlewood")
bowlerWktsRunsPlot("todd.csv","Todd")
bowlerWktsRunsPlot("lyonTest.csv","N Lyon")

9c. Average wickets at different venues

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerAvgWktsGround("cumminsTest.csv","Cummins")
bowlerAvgWktsGround("starcTest.csv","Starc")
bowlerAvgWktsGround("hazzlewoodTest.csv","Hazzlewood")
bowlerAvgWktsGround("todd.csv","Todd")
bowlerAvgWktsGround("lyonTest.csv","N Lyon")

9d Average wickets against different opposition

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerAvgWktsOpposition("cumminsTest.csv","Cummins")
bowlerAvgWktsOpposition("starcTest.csv","Starc")
bowlerAvgWktsOpposition("hazzlewoodTest.csv","Hazzlewood")
bowlerAvgWktsOpposition("todd.csv","Todd")
bowlerAvgWktsOpposition("lyonTest.csv","N Lyon")

9e Cumulative average wickets taken

par(mfrow=c(2,2))
par(mar=c(4,4,2,2))

bowlerCumulativeAvgWickets("cumminsTest.csv","Cummins")
bowlerCumulativeAvgWickets("starcTest.csv","Starc")
bowlerCumulativeAvgWickets("hazzlewoodTest.csv","Hazzlewood")
bowlerCumulativeAvgWickets("todd.csv","Todd")
bowlerCumulativeAvgWickets("lyonTest.csv","N Lyon")

9f Cumulative average economy rate

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerCumulativeAvgEconRate("cumminsTest.csv","Cummins")
bowlerCumulativeAvgEconRate("starcTest.csv","Starc")
bowlerCumulativeAvgEconRate("hazzlewoodTest.csv","Hazzlewood")
bowlerCumulativeAvgEconRate("todd.csv","Todd")
bowlerCumulativeAvgEconRate("lyonTest.csv","N Lyon")

9g. Future Wickets forecast

Here are plots that forecast how the bowlers will perform in future. In this case 90% of the career wickets trend is used as the training set; the remaining 10% is the test set.

A Holt-Winters forecasting model is used to forecast future performance based on the 90% training set. The forecasted wickets trend is plotted, and the test set is also plotted to see how closely the forecast matches the actual values.

par(mfrow=c(2,3))
par(mar=c(4,4,2,2))

bowlerPerfForecast("cumminsTest.csv","Cummins")
bowlerPerfForecast("starcTest.csv","Starc")
bowlerPerfForecast("hazzlewoodTest.csv","Hazzlewood")
bowlerPerfForecast("lyonTest.csv","N Lyon")

9h. Relative Wickets Frequency Percentage

frames <- list("cumminsTest.csv","starcTest.csv","hazzlewoodTest.csv","todd.csv","lyonTest.csv")
names <- list("Cummins","Starc","Hazzlewood","Todd","N Lyon")
relativeBowlingPerf(frames,names)

9i Relative Economy Rate against wickets taken

frames <- list("cumminsTest.csv","starcTest.csv","hazzlewoodTest.csv","todd.csv","lyonTest.csv")
names <- list("Cummins","Starc","Hazzlewood","Todd","N Lyon")
relativeBowlingER(frames,names)

9j Relative cumulative average wickets of bowlers in career

Cummins, Starc and Lyon are the best performers.

frames <- list("cumminsTest.csv","starcTest.csv","hazzlewoodTest.csv","todd.csv","lyonTest.csv")
names <- list("Cummins","Starc","Hazzlewood","Todd","N Lyon")
relativeBowlerCumulativeAvgWickets(frames,names)

9k Relative cumulative average economy rate of bowlers

Hazzlewood and Cummins have the best economy against all opposition.

frames <- list("cumminsTest.csv","starcTest.csv","hazzlewoodTest.csv","todd.csv","lyonTest.csv")
names <- list("Cummins","Starc","Hazzlewood","Todd","N Lyon")
relativeBowlerCumulativeAvgEconRate(frames,names)

9l Check for bowler in-form/out-of-form

The computation below uses Null Hypothesis testing and the p-value to determine whether the bowler is in-form or out-of-form. For this, 90% of the career wickets is chosen as the population and its mean computed. The last 10% is chosen as the sample set, and the sample mean and sample standard deviation are calculated.

The Null Hypothesis (H0) assumes that the bowler continues to stay in-form, i.e. the sample mean is within the 95% confidence interval of the population mean. The Alternative Hypothesis (Ha) assumes that the bowler is out of form, i.e. the sample mean is beyond the 95% confidence interval of the population mean.

A significance level of 0.05 is chosen and the p-value is computed. If p-value >= 0.05 – Bowler In-Form. If p-value < 0.05 – Bowler Out-of-Form.

Note: Ideally the test should be done on a population that follows the Normal Distribution, but the wickets population is usually left-skewed, so some correction may be needed. I will revisit this later.

Note: The check of the form status of the bowlers indicates the following

checkBowlerInForm("cummins.csv","Cummins")
## [1] "**************************** Form status of Cummins ****************************\n\n Population size: 81  Mean of population: 2.46 \n Sample size: 9  Mean of sample: 2 SD of sample: 1.5 \n\n Null hypothesis H0 : Cummins 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Cummins 's sample average is below the 95% confidence\n        interval of population average\n\n Cummins 's Form Status: In-Form because the p value: 0.190785  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("starc.csv","Starc")
## [1] "**************************** Form status of Starc ****************************\n\n Population size: 126  Mean of population: 2.18 \n Sample size: 15  Mean of sample: 1.67 SD of sample: 1.18 \n\n Null hypothesis H0 : Starc 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Starc 's sample average is below the 95% confidence\n        interval of population average\n\n Starc 's Form Status: In-Form because the p value: 0.057433  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("hazzlewood.csv","Hazzlewood")
## [1] "**************************** Form status of Hazzlewood ****************************\n\n Population size: 99  Mean of population: 2.04 \n Sample size: 12  Mean of sample: 1.67 SD of sample: 1.5 \n\n Null hypothesis H0 : Hazzlewood 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : Hazzlewood 's sample average is below the 95% confidence\n        interval of population average\n\n Hazzlewood 's Form Status: In-Form because the p value: 0.204787  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"
checkBowlerInForm("lyon.csv","N Lyon")
## [1] "**************************** Form status of N Lyon ****************************\n\n Population size: 193  Mean of population: 2.08 \n Sample size: 22  Mean of sample: 2.95 SD of sample: 1.96 \n\n Null hypothesis H0 : N Lyon 's sample average is within 95% confidence interval \n        of population average\n Alternative hypothesis Ha : N Lyon 's sample average is below the 95% confidence\n        interval of population average\n\n N Lyon 's Form Status: In-Form because the p value: 0.975407  is greater than alpha=  0.05 \n *******************************************************************************************\n\n"

10. Analysis of Australia WTC bowlers from Jan 2016 – May 2023 against India

10a Relative cumulative average wickets of bowlers in career

Against India Lyon, Cummins and Hazzlewood have performed well

frames <- list("cumminsTestInd.csv","starcTestInd.csv","hazzlewoodTestInd.csv","lyonTestInd.csv")
names <- list("Cummins","Starc","Hazzlewood","N Lyon")
relativeBowlerCumulativeAvgWickets(frames,names)

10b Relative cumulative average economy rate of bowlers

Hazzlewood, Lyon have a good economy rate against India

frames <- list("cumminsTestInd.csv","starcTestInd.csv","hazzlewoodTestInd.csv","lyonTestInd.csv")
names <- list("Cummins","Starc","Hazzlewood","N Lyon")
relativeBowlerCumulativeAvgEconRate(frames,names)

11. Analysis of teams – India, Australia

#The data for India & Australia teams were obtained with the following calls

#indiaTest <-getTeamDataHomeAway(dir=".",teamView="bat",matchType="Test",file="indiaTest.csv",save=TRUE,teamName="India")
#australiaTest <- getTeamDataHomeAway(matchType="Test",file="australiaTest.csv",save=TRUE,teamName="Australia")

11a. Win-loss of India against all oppositions in Test cricket

teamWinLossStatusVsOpposition("indiaTest.csv",teamName="India",opposition=c("all"),homeOrAway=c("all"),matchType="Test",plot=TRUE)

11b. Win-loss of Australia against all oppositions in Test cricket

teamWinLossStatusVsOpposition("australiaTest.csv",teamName="Australia",opposition=c("all"),homeOrAway=c("all"),matchType="Test",plot=TRUE)

11c. Win-loss of India against Australia in Test cricket

Against Australia, India has won 17 times, lost 60 and drawn 22 in Australia. At home, India has won 42, tied 2, lost 28 and drawn 24.

teamWinLossStatusVsOpposition("indiaTest.csv",teamName="India",opposition=c("Australia"),homeOrAway=c("all"),matchType="Test",plot=TRUE)

11d. Win-loss of India at all away venues

At the Oval, where the WTC final will be held, India has won 4, lost 10 and drawn 10.

teamWinLossStatusAtGrounds("indiaTest.csv",teamName="India",opposition=c("all"),homeOrAway=c("away"),matchType="Test",plot=TRUE)

11e. Timeline of win-loss of India against Australia in Test cricket

plotTimelineofWinsLosses("indiaTest.csv",team="India",opposition=c("Australia"),
                         homeOrAway=c("away","neutral"), startDate="2016-01-01",endDate="2023-05-01")

12. Conclusion

The sections above perform various analyses of India and Australia in home and away matches. While we know how the players perform in India or Australia, we cannot judge how the match will progress in the neutral, swinging conditions of the Oval. Let us hope for a good match!

Feel free to try out your own analysis with cricketr. Have fun with cricketr!!

Also see

  1. GooglyPlusPlus: Win Probability using Deep Learning and player embeddings
  2. The common alphabet of programming languages
  3. Practical Machine Learning with R and Python – Part 5
  4. Deep Learning from first principles in Python, R and Octave – Part 4
  5. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  6. Cricpy takes guard for the Twenty20s
  7. Using Reinforcement Learning to solve Gridworld
  8. Exploring Quantum Gate operations with QCSimulator

To see all posts click Index of posts

IPL 2023: GooglyPlusPlus now with AI/ML models, near real-time analytics!

It is carnival time again as IPL 2023 is underway!! The new GooglyPlusPlus now includes AI/ML models for computing ball-by-ball Win Probability of matches and each individual player’s Win Probability Contribution (WPC). GooglyPlusPlus uses 2 ML models

  • Deep Learning (Tensorflow) – accuracy: 0.8584
  • Logistic Regression (glmnet-tidymodels) – accuracy: 0.728

Besides, as before, GooglyPlusPlus will also include the usual near real-time analytics with the Shiny app being automatically updated with the previous day’s match data.

Note: The Win Probability computation can also be done on a live feed of streaming data. Since I don’t have access to live feeds, the app shows how Win Probability changed during the course of completed matches. For more details on Win Probability and Win Probability Contribution, see my earlier posts.

GooglyPlusPlus has also been updated with match data from all the latest T20 leagues. It includes data from BBL 2022, NTB 2022, CPL 2022, PSL 2023, ICC T20 2022 and now IPL 2023.

GooglyPlusPlus has the following functionality

  • Batsman tab: For detailed analysis of batsmen
  • Bowler tab: For detailed analysis of bowlers
  • Match tab: Analysis of individual matches, plot of Runs vs SR, Wickets vs ER in power play, middle and death overs, Win Probability Analysis of teams and Win Probability Contribution of players
  • Head-to-head tab: Detailed analysis of team-vs-team batting/bowling scorecard, batting, bowling performances, performances in power play, middle and death overs
  • Team performance tab: Analysis of team-vs-all other teams with batting /bowling scorecard, batting, bowling performances, performances in power play, middle and death overs
  • Optimisation tab: Allows one to pit batsmen vs bowlers and vice-versa. This tab also uses integer programming to optimise batting and bowling lineup
  • Batting analysis tab: Ranks batsmen using Runs or SR. Also plots performances of batsmen in power play, middle and death overs and plots them in a 4×4 grid
  • Bowling analysis tab: Ranks bowlers based on Wickets or ER. Also plots performances of bowlers in power play, middle and death overs and plots them in a 4×4 grid

Also note all these tabs and features are available for all T20 formats namely IPL, Intl. T20 (men, women), BBL, NTB, PSL, CPL, SSM.

Important note: It is possible that, at times, the Win Probability (Deep Learning) for some recent IPL matches will give an error. This is because I need to rebuild the models on a daily basis, as the matches use player embeddings and there are new players. While I will definitely rebuild the models on weekends and whenever I find time, you may have to bear with this error occasionally.

Note: All charts are interactive, which means that you can hover, zoom in, zoom out, pan etc. on the charts.

The latest avatar of GooglyPlusPlus2023 is based on my R package yorkr with data from Cricsheet.

Check out the latest version of GooglyPlusPlus

Follow me on twitter for daily highlights @tvganesh_85

GooglyPlusPlus can analyse players, matches, teams, rank, compute win probability and much more.

Included below are some random analyses of IPL 2023 matches so far

A) Chennai Super Kings vs Gujarat Titans – 31 Mar 2023

GT won by 5 wickets ( 4 balls remaining)

a) Worm Wicket Chart

b) Ball-by-ball Win Probability (Logistic Regression) (side-by-side)

This model shows that CSK had the upper hand until the second-last over, before the advantage shifted to GT. More details on Win Probability and Win Probability Contribution are in the posts linked above.

c) Ball-by-ball Win Probability (Logistic Regression) (overlapping)

Here the ball-by-ball win probabilities are overlapped. CSK and GT had nearly the same probability of winning until the second-last over, before GT edged CSK out.

B) Punjab Kings vs Rajasthan Royals – 05 Apr 2023

This was another closely fought match. PBKS won by 5 runs

a) Worm wicket chart

b) Batting partnerships

Shikhar Dhawan scored 86 runs

c) Ball-by-ball Win Probability using Deep Learning (overlapping)

PBKS was generally ahead in the win probability race

d) Batsman Win Probability Contribution

This plot shows how the different batsmen contributed to the Win Probability. We can see that Shikhar Dhawan has the highest win probability contribution; he played a very sensible innings. Also, it appears that there is little difference between Prabhsimran Singh and the others, though he scored 60 runs. This computation is based on when a batsman comes in to bat and how the win probability changes when he gets dismissed, as seen in the 2nd chart

C) Delhi Capitals vs Gujarat Titans – 4 Apr 2023

GT won by 6 wickets (11 balls remaining)

a) Worm wicket chart

b) Runs scored across 20 overs

c) Runs vs SR plot

d) Batting scorecard (Gujarat Titans)

e) Batsman Win Probability Contribution (Gujarat Titans)

Miller has a higher percentage in the Win Contribution than Sai Sudershan, who held the innings together. Strange are the ways of the ML models!!

D) Sunrisers Hyderabad vs Lucknow Supergiants (7 Apr 2023)

LSG won by 5 wickets (24 balls left). SRH were bamboozled by the pitch while LSG were able to cruise along

a) Worm wicket chart

b) Wickets vs ER plot

c) Wickets across 20 overs

d) Ball-by-ball win probability using Deep Learning (overlapping)

e) Bowler Win Probability Contribution (LSG)

Bishnoi has a higher win probability contribution than Krunal, though he took just 1 wicket to Krunal’s 3. This is based on how the Win Probability changed at that point in the game.

The above set of plots is just a random sample.

Note: There are 8 tabs each for 9 T20 leagues (BBL, CPL, T20 (men), T20 (women), IPL, PSL, NTB, SSM, WBB). So there are a lot more detailed charts/analyses.

Do take GooglyPlusPlus for a test drive!!!

Follow me on twitter @tvganesh_85 for daily highlights of previous day matches

Take a look at some of my other posts

  1. Using Reinforcement Learning to solve Gridworld
  2. Deep Learning from first principles in Python, R and Octave – Part 6
  3. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  4. Experiments with deblurring using OpenCV
  5. Singularity
  6. Practical Machine Learning with R and Python – Part 6
  7. Pitching yorkpy … short of good length to IPL – Part 1
  8. Analyzing performances of cricketers using cricketr template
  9. Cricpy takes guard for the Twenty20s
  10. Simulating an Edge Shape in Android

To see all posts click Index of posts

GooglyPlusPlus: Computing T20 player’s Win Probability Contribution

In this post, I compute each batsman’s or bowler’s Win Probability Contribution (WPC) in a T20 match. This metric captures by how much the player (batsman or bowler) changed/impacted the Win Probability of the T20 match. For this computation I use the machine learning models I had created earlier, which predict the ball-by-ball win probability as the T20 match progresses through the 2 innings of the match.

In the picture snippet below, you can see how the win probability changes ball-by-ball for each batsman for a T20 match between CSK vs LSG- 31 Mar 2022

In my previous posts I had created several Machine Learning models. In order to compute the player’s Win Probability contribution in this post, I have used the following ML models

The batsman’s or bowler’s win probability contribution changes ball-by-ball. The player’s contribution is calculated as the difference in win probability between when the batsman faces the 1st ball of his innings and the last ball, either when he is out or when the innings comes to an end. If the difference is +ve the player has had a positive impact, and likewise for a negative contribution. Similarly, for a bowler, it is the change in win probability from when he/she comes in to bowl till the last delivery he/she bowls
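
As a rough illustration, here is a minimal Python sketch of this computation, assuming a hypothetical DataFrame wp_df with one row per ball, a batsman column and a ball-by-ball winProb column (all names are mine, not from the app):

import pandas as pd

# Hypothetical ball-by-ball win probabilities for one innings
wp_df = pd.DataFrame({
    'batsman': ['Dhawan', 'Dhawan', 'Singh', 'Singh', 'Dhawan'],
    'winProb': [0.52, 0.55, 0.55, 0.48, 0.60]
})

# WPC = win probability at the last ball a batsman faces minus that at his 1st ball
# A +ve value means a positive impact, a -ve value a negative one
grouped = wp_df.groupby('batsman')['winProb']
wpc = grouped.last() - grouped.first()
print(wpc)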

Note: The Win Probability Contribution does not have any relation to how many runs, or at what strike rate, the batsman scored. Rather, the model computes a different win probability for each player, based on his/her embedding, the ball in the innings and six other features like runs, run rate, runsMomentum etc. These values change for every ball as seen in the table above. Also, this is not continuous. The 2 ML models determine the Win Probability for a specific player, ball and the context in the match.

This metric is similar to Win Probability Added (WPA) used in Sabermetrics for baseball. Here is the definition of WPA from Fangraphs “Win Probability Added (WPA) captures the change in Win Expectancy from one plate appearance to the next and credits or debits the player based on how much their action increased their team’s odds of winning.” This article in Fangraphs explains in detail how this computation is done.

In this post I have added 4 new functions to my R package yorkr.

  • batsmanWinProbLR – batsman’s win probability contribution based on glmnet (Logistic Regression)
  • bowlerWinProbLR – bowler’s win probability contribution based on glmnet (Logistic Regression)
  • batsmanWinProbDL – batsman’s win probability contribution based on Deep Learning Model
  • bowlerWinProbDL – bowler’s win probability contribution based on Deep Learning

Hence there are 4 additional features in GooglyPlusPlus based on the above 4 functions. In addition I have also updated

-winProbLR (overLap) function to include the names of batsmen when they come to bat and when they get out or the innings comes to an end, based on Logistic Regression

-winProbDL (overLap) function to include the names of batsmen when they come to bat and when they get out, based on Deep Learning

Hence there are 6 new features in this version of GooglyPlusPlus.

Note: All these new 6 features are available for all 9 formats of T20 in GooglyPlusPlus namely

a) IPL b) BBL c) NTB d) PSL e) Intl. T20 (men) f) Intl. T20 (women) g) WBB h) CPL i) SSM

Check out the latest version of GooglyPlusPlus at gpp2023-2

Note: The data for GooglyPlusPlus comes from Cricsheet and the Shiny app is based on my R package yorkr

A) Chennai SuperKings vs Delhi Capitals – 04 Oct 2021

To understand Win Probability Contribution better let us look at Chennai Super Kings vs Delhi Capitals match on 04 Oct 2021

This was a closely fought match with fortunes swinging wildly. Let us take a look at the Worm wicket chart of this match

a) Worm Wicket chart – CSK vs DC – 04 Oct 2021

Delhi Capitals finally win the match

b) Win Probability Logistic Regression (side-by-side) – CSK vs DC – 4 Oct 2021

Plotting how win probability changes over the course of the match using Logistic Regression Model

In this match Delhi Capitals won. The batting scorecard of Delhi Capitals

c) Batting Scorecard of Delhi Capitals – CSK vs DC – 4 Oct 2021

d) Win Probability Logistic Regression (Overlapping) – CSK vs DC – 4 Oct 2021

The Win Probability LR (overlapping) shows the probability functions of both teams superimposed on one another. The plot includes when a batsman came in to play and when he got out, for both teams. This looks a little noisy, but there is a way to selectively display the change in Win Probability for each team. This can be done by clicking the 3 legend items (orange or blue) from top to bottom: first double-click the team CSK or DC, then click the next 2 items (blue, red or black, grey). Sorry the legends don’t match the colors! 😦

Below we can see how the win probability changed for Delhi Capitals during their innings, as batsmen came in to play.

e) Batsman Win Probability contribution:DC – CSK vs DC – 4 Oct 2021

Computing the individual batsman’s Win Contribution and plotting, we see that Hetmeyer has a higher Win Probability contribution than Shikhar Dhawan despite scoring fewer runs

f) Bowler’s Win Probability contribution :CSK – CSK vs DC – 4 Oct 2021

We can also check the Win Probability contribution of the bowlers, e.g. the CSK bowlers, and see which bowlers had the most impact. Moeen Ali had the least impact in this match

B) Intl. T20 (men) Australia vs India – 25 Sep 2022

a) Worm wicket chart – Australia vs India – 25 Sep 2022

This was another close match in which India won off the penultimate ball

b) Win Probability based on Deep Learning model (side-by-side) – Australia vs India – 25 Sep 2022

c) Win Probability based on Deep Learning model (overlapping) – Australia vs India – 25 Sep 2022

The plot below shows how the Win Probability of the teams varied across the 20 overs. The 2 Win Probability distributions are superimposed over each other

d) Batsman Win Probability Contribution : India – Australia vs India – 25 Sep 2022

Selectively choosing the India Win Probability plot by double-clicking the legend ‘India’ on the right, followed by a single click of the black, grey legend, we have

We see that Kohli and Suryakumar Yadav have a good contribution to the Win Probability

e) Plotting the Runs vs Strike Rate:India – Australia vs India – 25 Sep 2022

f) Batsman’s Win Probability Contribution- Australia vs India – 25 Sep 2022

Finally plotting the Batsman’s Win Probability Contribution

Interestingly, Kohli has a greater Win Probability Contribution than SKY, though SKY scored more runs at a better strike rate. As mentioned above, the Win Probability is context dependent and also depends on past performances of the player (batsman, bowler)

Finally let us look at

C) India vs England Intl. T20 Women (11 July 2021)

a) Worm wicket chart – India vs England Intl. T20 Women (11 July 2021)

India won this T20 match by 8 runs

b) Win Probability using the Logistic Regression Model – India vs England Intl. T20 Women (11 July 2021)

c) Win Probability with the DL model – India vs England Intl. T20 Women (11 July 2021)

d) Bowler Win Probability Contribution with the LR model India vs England Intl. T20 Women (11 July 2021)

e) Bowler Win Contribution with the DL model India vs England Intl. T20 Women (11 July 2021)

Go ahead and try out the latest version of GooglyPlusPlus

Also see my other posts

  1. Deep Learning from first principles in Python, R and Octave – Part 8
  2. A method to crowd source pothole marking on (Indian) roads
  3. Big Data 7: yorkr waltzes with Apache NiFi
  4. Practical Machine Learning with R and Python – Part 6
  5. Introducing cricpy:A python package to analyze performances of cricketers
  6. Revisiting World Bank data analysis with WDI and gVisMotionChart
  7. Literacy in India – A deepR dive
  8. Cricketr learns new tricks : Performs fine-grained analysis of players
  9. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”
  10. Adventures in LogParser, HTA and charts

To see all posts click Index of posts

T20 Win Probability using CTGANs, synthetic data

This should be my last post on computing T20 Win Probability. In this post I compute Win Probability using Augmented Data with the help of Conditional Tabular Generative Adversarial Networks (CTGANs).

A.Introduction

I started the computation of T20 match Win Probability in my earlier post

a) ‘Computing Win-Probability of T20 matches‘ where I used

  • vanilla Logistic Regression, which gave an accuracy of 0.67
  • Random Forest with Tidymodels, which gave an accuracy of 0.737
  • Deep Learning with Keras, which also gave about 0.73.

This was done without player embeddings

b) Next I used player embeddings for batsmen and bowlers in my post Boosting Win Probability accuracy with player embeddings , and my accuracies improved significantly

  • glmnet : accuracy – 0.728 and roc_auc – 0.81
  • random forest : accuracy – 0.927 and roc_auc – 0.98
  • mlp-dnn :accuracy – 0.762 and roc_auc – 0.854

c) Third I tried using Deep Learning with Keras using player embeddings

  • DL network gave an accuracy of 0.8639

This was lightweight and could be easily deployed in my Shiny GooglyPlusPlus app, as opposed to Tidymodels’ Random Forest, which was bulky and slow.

d) Finally I decided to try and improve the accuracy of my Deep Learning model using synthetic data. Towards this end, my explorations led me to Conditional Tabular Generative Adversarial Networks (CTGANs). CTGANs are GANs designed to work with tabular data, since vanilla GAN models do not handle tabular data well. However, the best performance I got was

  • DL Keras Model + Synthetic data : accuracy =0.77

The poorer accuracy was because CTGAN requires enormous computing power (GPUs) and RAM. The free versions of Colab and Kaggle kept crashing when I tried with even 10% of my 1.2-million-row dataset. Finally, I tried with just a 5% sample (frac=0.05, see the code below) and was able to generate synthetic data. Most likely, the small sample size and the smaller number of epochs are the reasons for the poor result. In any case, it was worth trying, and this approach should work with sufficient computing resources.

B.Generative Adversarial Networks (GANs)

Generative Adversarial Networks (GANs) were the brainchild of Ian Goodfellow, who demonstrated them in 2014. GANs are capable of generating synthetic text, tables, images and videos from available data. In the adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution.

GANs have 2 Deep Neural Networks, the Generator and the Discriminator, which compete against each other

  • The Generator (Counterfeiter) takes random noise as input and generates fake images, tables, text. The generator learns to generate plausible data. The generated instances become negative training examples for the discriminator.
  • The Discriminator (Police) tries to distinguish between the real and fake images, tables or text. The discriminator learns to distinguish the generator’s fake data from real data and penalises the generator for producing implausible results. A toy sketch of this adversarial loop is given below.
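
To make the Generator/Discriminator interplay concrete, here is a minimal, self-contained Keras sketch of the adversarial training loop on a toy 1-D dataset. This is only an illustration of the mechanics (the network sizes, data and hyper-parameters are my own choices), not the CTGAN used later in this post.

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 8
batch = 64

# Generator (Counterfeiter): random noise -> fake 1-D samples
generator = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(latent_dim,)),
    layers.Dense(1)
])

# Discriminator (Police): sample -> probability that the sample is real
discriminator = keras.Sequential([
    layers.Dense(16, activation='relu', input_shape=(1,)),
    layers.Dense(1, activation='sigmoid')
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

# Stacked model: freeze the discriminator and train the generator to fool it
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer='adam', loss='binary_crossentropy')

for step in range(500):
    # 1. Train the discriminator: real samples labelled 1, generated samples labelled 0
    real = np.random.normal(4.0, 1.5, size=(batch, 1))  # toy 'real' data
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    x = np.vstack([real, fake])
    y = np.vstack([np.ones((batch, 1)), np.zeros((batch, 1))])
    d_loss = discriminator.train_on_batch(x, y)

    # 2. Train the generator: make the (frozen) discriminator output 1 on fakes
    noise = np.random.normal(size=(batch, latent_dim))
    g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))

At convergence the discriminator outputs roughly 0.5 for both real and fake samples, which is the Nash equilibrium mentioned below.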

A pictorial representation of the GAN model is shown below

Theoretically, the best performance of GANs is supposed to happen when the network reaches the ‘Nash equilibrium‘, i.e. when the Generator produces fake images that are close to real and the Discriminator’s loss is ~0.5, i.e. the discriminator is unable to distinguish between real and fake images.

Note: Though I have mentioned T20 data in the above GAN model, the T20 tabular data is actually used in CTGAN which is slightly different from the above. See Reference 2) below.

C. Conditional Tabular Generative Adversarial Networks (CTGANs)

“Modeling the probability distribution of rows in tabular data and generating realistic synthetic data is a non-trivial task. Tabular data usually contains a mix of discrete and continuous columns. Continuous columns may have multiple modes whereas discrete columns are sometimes imbalanced making the modeling difficult.” CTGANs handle these challenges.

I came upon CTGAN after spending some time exploring GANs via blogs, videos etc. For building the model I use real T20 match data. However, CTGAN requires immense raw computing power and a lot of RAM. My initial attempts on Colab and my Mac (12 core, 32GB RAM) took forever before eventually crashing, so I switched to Kaggle and used GPUs. Still, I was able to use only a minuscule part of my T20 dataset. My match data has 1.2 million rows; anything larger than a 5% sample resulted in Kaggle crashing. Since I was able to use only a fraction, I executed the CTGAN model over several iterations, each iteration with a random 5% sample of the dataset, and generated a synthetic dataset at the end of each iteration. Over 12 iterations, I generated close to 360K rows of ‘synthetic‘ T20 match data.

I then augment the 1.2 million rows of ‘real‘ T20 match data with the generated ‘synthetic‘ T20 match data to run my Deep Learning model

D. Executing the CTGAN model

a. Read the real T20 match data

!pip install ctgan
import pandas as pd
import ctgan
from ctgan import CTGAN
from numpy.random import seed

# Read the T20 match data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')

# Randomly sample 5% of the dataset. Note: larger fractions cause the algo to crash
train_dataset = df.sample(frac=0.05)

# Print the real T20 match data
print(train_dataset.head(10))
print(train_dataset.shape)

             batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  \
363695         3333        432      134             119   153  1.285714   
1082839        3881       1180      218              30    93  3.100000   
595799         2366        683      187              65   120  1.846154   
737614         4490       1381      148              87   144  1.655172   
410202          934       1003       19             106    35  1.842105   
525627          921       1711      251               1     8  8.000000   
657669         4718       1602      130             115   145  1.260870   
666461         4309       1989       44              87    38  0.863636   
651229         3336        754       30              92    36  1.200000   
709892         3048        421       97              28   119  1.226804   

            numWickets  runsMomentum  perfIndex  isWinner  
363695            0      0.092437  18.333333         1  
1082839           5      0.200000   4.736842         0  
595799            4      0.107692   9.566667         0  
737614            1      0.114943   9.130435         1  
410202            0      0.103774  20.263158         0  
525627            8      3.000000   3.837209         0  
657669            0      0.095652  19.555556         0  
666461            0      0.126437   9.500000         0  
651229            0      0.119565  13.200000         0  
709892            3      0.285714   9.814433         1  
(59956, 10)

b. Run CTGAN model on the real T20 data

import pandas as pd
import ctgan
from ctgan import CTGAN
from numpy.random import seed
from pickle import TRUE

df = pd.read_csv('/kaggle/input/cricket1/t20.csv')

#Specify the categorical features. batsmanIdx & bowlerIdx are player embeddings
categorical_features = ['batsmanIdx','bowlerIdx']

# Create an empty dataframe for synthetic data
df1 = pd.DataFrame()

# Loop for 12 iterations. Minimize generator & discriminator loss
for i in range(12):
    print(i)
    train_dataset = df.sample(frac=0.05)
    seed(33)

    ctgan = CTGAN(epochs=20,verbose=True,generator_lr=.001,discriminator_lr=.001,batch_size=1000)
    ctgan.fit(train_dataset, categorical_features)

    # Generate synthetic data
    samples = ctgan.sample(30000)

   # Concatenate the synthetic data after each iteration
    df1 = pd.concat([df1,samples])
    print(samples.head())
    print(df1.shape)

# Output the synthetic data to file
df1.to_csv("output1.csv",index=False)

0
Epoch 1, Loss G:  8.3825,Loss D: -0.6159
Epoch 2, Loss G:  3.5117,Loss D: -0.3016
Epoch 3, Loss G:  2.1619,Loss D: -0.5713
Epoch 4, Loss G:  0.9847,Loss D:  0.1010
Epoch 5, Loss G:  0.6198,Loss D:  0.0789
Epoch 6, Loss G:  0.1710,Loss D:  0.0959
Epoch 7, Loss G:  0.3236,Loss D: -0.1554
Epoch 8, Loss G:  0.2317,Loss D: -0.0765
Epoch 9, Loss G: -0.0127,Loss D:  0.0275
Epoch 10, Loss G:  0.1477,Loss D: -0.0353
Epoch 11, Loss G:  0.0997,Loss D: -0.0129
Epoch 12, Loss G:  0.0066,Loss D: -0.0486
Epoch 13, Loss G:  0.0351,Loss D: -0.0805
Epoch 14, Loss G: -0.1399,Loss D: -0.0021
Epoch 15, Loss G: -0.1503,Loss D: -0.0518
Epoch 16, Loss G: -0.2306,Loss D: -0.0234
Epoch 17, Loss G: -0.2986,Loss D:  0.0469
Epoch 18, Loss G: -0.1941,Loss D: -0.0560
Epoch 19, Loss G: -0.3794,Loss D:  0.0000
Epoch 20, Loss G: -0.2763,Loss D:  0.0368
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0         906        224        8              75    81  1.955153           4   
1        4159        433       17              31   126  1.799280           9   
2         229        351      192              66    82  1.608527           5   
3        1926        962       63               0   117  1.658105           0   
4         286        431      128               1    36  1.605079           0   

   runsMomentum  perfIndex  isWinner  
0      0.146670   6.937595         1  
1      0.160534  10.904346         1  
2      0.516010  11.698128         1  
3      0.380986  11.914613         0  
4      0.112255   5.392120         0  
(30000, 10)
1
Epoch 1, Loss G:  7.9977,Loss D: -0.3592
Epoch 2, Loss G:  3.7418,Loss D: -0.3371
Epoch 3, Loss G:  1.6685,Loss D: -0.3211
Epoch 4, Loss G:  1.0539,Loss D: -0.3495
Epoch 5, Loss G:  0.4664,Loss D: -0.0907
Epoch 6, Loss G:  0.4004,Loss D: -0.1208
Epoch 7, Loss G:  0.3250,Loss D: -0.1482
Epoch 8, Loss G:  0.1753,Loss D:  0.0169
Epoch 9, Loss G:  0.1382,Loss D:  0.0661
Epoch 10, Loss G:  0.1509,Loss D: -0.1023
Epoch 11, Loss G: -0.0235,Loss D:  0.0210
Epoch 12, Loss G: -0.1636,Loss D: -0.0124
Epoch 13, Loss G: -0.3370,Loss D: -0.0185
Epoch 14, Loss G: -0.3054,Loss D: -0.0085
Epoch 15, Loss G: -0.5142,Loss D:  0.0121
Epoch 16, Loss G: -0.3813,Loss D: -0.0921
Epoch 17, Loss G: -0.5838,Loss D:  0.0210
Epoch 18, Loss G: -0.4033,Loss D: -0.0181
Epoch 19, Loss G: -0.5711,Loss D:  0.0269
Epoch 20, Loss G: -0.4828,Loss D: -0.0830
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0        2202        265      223              39    13  0.868927           0   
1        3641        856       35              59    26  2.236160           6   
2         676       2903      218              93    16  0.460693           1   
3        3482       3459       44             117   102  0.851471           8   
4        3046       3076       59               5    84  1.016824           2   

   runsMomentum  perfIndex  isWinner  
0      0.138586   4.733462         0  
1      0.124453   5.146831         1  
2      0.273168  10.106869         0  
3      0.129520   5.361127         0  
4      1.083525  25.677574         1  
(60000, 10)
...
...
11
Epoch 1, Loss G:  8.8362,Loss D: -0.7111
Epoch 2, Loss G:  4.1322,Loss D: -0.8468
Epoch 3, Loss G:  1.2782,Loss D:  0.1245
Epoch 4, Loss G:  1.1135,Loss D: -0.3588
Epoch 5, Loss G:  0.6033,Loss D: -0.1255
Epoch 6, Loss G:  0.6912,Loss D: -0.1906
Epoch 7, Loss G:  0.3340,Loss D: -0.1048
Epoch 8, Loss G:  0.3515,Loss D: -0.0730
Epoch 9, Loss G:  0.1702,Loss D:  0.0237
Epoch 10, Loss G:  0.1064,Loss D:  0.0632
Epoch 11, Loss G:  0.0884,Loss D: -0.0005
Epoch 12, Loss G:  0.0556,Loss D: -0.0607
Epoch 13, Loss G: -0.0917,Loss D: -0.0223
Epoch 14, Loss G: -0.1492,Loss D:  0.0258
Epoch 15, Loss G: -0.0986,Loss D: -0.0112
Epoch 16, Loss G: -0.1428,Loss D: -0.0060
Epoch 17, Loss G: -0.2225,Loss D: -0.0263
Epoch 18, Loss G: -0.2255,Loss D: -0.0328
Epoch 19, Loss G: -0.3482,Loss D:  0.0277
Epoch 20, Loss G: -0.2667,Loss D: -0.0721
   batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs   runRate  numWickets  \
0         367       1447      129              27    30  1.242120           2   
1        2481       1528      221               4    10  1.344024           2   
2        1034       3116      132              87   153  1.142750           3   
3        1201       2868      151              60   136  1.091638           1   
4        4327       3291      108              89    22  0.842775           2   

   runsMomentum  perfIndex  isWinner  
0      1.978739   6.393691         1  
1      0.539650   6.783990         0  
2      0.107156  12.154197         0  
3      3.193574  11.992059         0  
4      0.127507  12.210876         0  
(360000, 10)
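
Since each CTGAN fit takes a long time, it may be worth persisting the trained synthesizer between sessions. A small sketch, assuming the save/load helpers provided by the ctgan package’s synthesizer (do verify against your installed version):

# Persist the trained synthesizer so it can be sampled later without refitting
ctgan.save('ctgan_t20.pkl')

# ... in a later session, reload and draw more synthetic rows
from ctgan import CTGAN
ctgan = CTGAN.load('ctgan_t20.pkl')
more_samples = ctgan.sample(30000)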

E. Sample of the Synthetic data

synthetic_data = ctgan.sample(20000)
print(synthetic_data.head(100))

    batsmanIdx  bowlerIdx  ballNum  ballsRemaining  runs    runRate  \
0         1073       3059       72              72   149   2.230236   
1         3769       1443      106               7   137   0.881409   
2          448       3048      166               6   220   1.092504   
3         2969       1244      103              82   207  12.314862   
4          180       1372      125             111    14   1.310051   
..         ...        ...      ...             ...   ...        ...   
95        1521       1040      153               6   166   1.097363   
96        2366         62       25             114   119   0.910642   
97        3506       1736      100             118   140   1.640921   
98        3343       2347       47              54    50   0.696462   
99        1957       2888      136              27   153   1.315565   

    numWickets  runsMomentum  perfIndex  isWinner  
0            0      0.111707  17.466925         0  
1            1      0.130352  14.274113         0  
2            1      0.173541  11.076731         1  
3            1      0.218977   6.239951         0  
4            4      2.829380   9.183323         1  
..         ...           ...        ...       ...  
95           0      0.223437   7.011180         0  
96           1      0.451371  16.908120         1  
97           5      0.156936   9.217205         0  
98           6      0.124536   6.273091         0  
99           1      0.249329  14.221554         0  

[100 rows x 10 columns]

F. Evaluating the synthetic T20 match data

Here the quality of the synthetic data set is evaluated.

a) Statistical evaluation

  • Read the real T20 match data
  • Read the generated T20 synthetic match data
import pandas as pd

# Read the T20 match and synthetic match data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')  # 1.2 million rows
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv')   #300K

# Randomly sample 1000 rows, and generate stats
df1=df.sample(n=1000)
real=df1.describe()
realData_stats=real.transpose()  # note: transpose is a method and needs parentheses
print(real)

synthetic1=synthetic.sample(n=1000)
synthetic_summary=synthetic1.describe()  # new name, so the synthetic DataFrame is not clobbered
syntheticData_stats=synthetic_summary.transpose()
synthetic_summary

a) Stats of real T20 match data

        batsmanIdx    bowlerIdx      ballNum  ballsRemaining         runs  \
count  1000.000000  1000.000000  1000.000000     1000.000000  1000.000000   
mean   2323.940000  1776.481000   118.165000       59.236000    77.649000   
std    1329.703046  1011.470703    70.564291       35.312934    49.098763   
min       8.000000    13.000000     1.000000        1.000000    -2.000000   
25%    1134.750000   850.000000    58.000000       28.750000    39.000000   
50%    2265.000000  1781.500000   117.000000       59.000000    72.000000   
75%    3510.000000  2662.250000   178.000000       89.000000   111.000000   
max    4738.000000  3481.000000   265.000000      127.000000   246.000000   

           runRate   numWickets  runsMomentum    perfIndex     isWinner  
count  1000.000000  1000.000000   1000.000000  1000.000000  1000.000000  
mean      1.734979     2.614000      0.310568     9.580386     0.499000  
std       5.698104     2.267189      0.686171     4.530856     0.500249  
min      -2.000000     0.000000      0.071429     0.000000     0.000000  
25%       1.009063     1.000000      0.105769     6.666667     0.000000  
50%       1.272727     2.000000      0.141026     9.236842     0.000000  
75%       1.546891     4.000000      0.250000    12.146735     1.000000  
max     166.000000    10.000000     10.000000    30.800000     1.000000

b) Stats of Synthetic T20 match data

     
           batsmanIdx    bowlerIdx      ballNum  ballsRemaining         runs  \
count  1000.000000  1000.000000  1000.000000     1000.000000  1000.000000   
mean   2304.135000  1760.776000   116.081000       50.102000    74.357000   
std    1342.348684  1003.496003    72.019228       35.795236    48.103446   
min       2.000000    15.000000    -4.000000       -2.000000    -1.000000   
25%    1093.000000   881.000000    46.000000       18.000000    30.000000   
50%    2219.500000  1763.500000   116.000000       45.000000    75.000000   
75%    3496.500000  2644.750000   180.250000       77.000000   112.000000   
max    4718.000000  3481.000000   253.000000      124.000000   222.000000   

           runRate   numWickets  runsMomentum    perfIndex     isWinner  
count  1000.000000  1000.000000   1000.000000  1000.000000  1000.000000  
mean      1.637225     3.096000      0.336540     9.278073     0.507000  
std       1.691060     2.640408      0.502346     4.727677     0.500201  
min      -4.388339     0.000000      0.083351    -0.902991     0.000000  
25%       1.077789     1.000000      0.115770     5.731931     0.000000  
50%       1.369655     2.000000      0.163085     9.104328     1.000000  
75%       1.660477     5.000000      0.311586    12.619318     1.000000  
max      23.757001    10.000000      4.630908    29.829497     1.000000

c) Plotting the Generator and Discriminator loss

import pandas as pd

# 'output' is assumed to hold the captured console log of ctgan.fit()
# (e.g. captured with %%capture output in a Jupyter/Kaggle cell)
# CTGAN prints out a new line for each epoch
epochs_output = str(output).split('\n')

# CTGAN separates the values with commas
raw_values = [line.split(',') for line in epochs_output]
loss_values = pd.DataFrame(raw_values)[:-1] # convert to df and delete last row (empty)

# Rename columns
loss_values.columns = ['Epoch', 'Generator Loss', 'Discriminator Loss']

# Extract the numbers from each column (raw strings avoid invalid escape warnings)
loss_values['Epoch'] = loss_values['Epoch'].str.extract(r'(\d+)').astype(int)
loss_values['Generator Loss'] = loss_values['Generator Loss'].str.extract(r'([-+]?\d*\.\d+|\d+)').astype(float)
loss_values['Discriminator Loss'] = loss_values['Discriminator Loss'].str.extract(r'([-+]?\d*\.\d+|\d+)').astype(float)

# the result is a row for each epoch that contains the generator and discriminator loss
loss_values.head()

	Epoch	Generator Loss	Discriminator Loss
0	1	8.0158	-0.3840
1	2	4.6748	-0.9589
2	3	1.1503	-0.0066
3	4	1.5593	-0.8148
4	5	0.6734	-0.1425
5	6	0.5342	-0.2202
6	7	0.4539	-0.1462
7	8	0.2907	-0.0155
8	9	0.2399	0.0172
9	10	0.1520	-0.0236
import plotly.graph_objects as go

# Plot loss function
fig = go.Figure(data=[go.Scatter(x=loss_values['Epoch'], y=loss_values['Generator Loss'], name='Generator Loss'),
                      go.Scatter(x=loss_values['Epoch'], y=loss_values['Discriminator Loss'], name='Discriminator Loss')])


# Update the layout for best viewing
fig.update_layout(template='plotly_white',
                    legend_orientation="h",
                    legend=dict(x=0, y=1.1))

title = 'CTGAN loss function for T20 dataset - ' 
fig.update_layout(title=title, xaxis_title='Epoch', yaxis_title='Loss')
fig.show()

G. Qualitative evaluation of Synthetic data

a) Quality of continuous columns in synthetic data

KSComplement – This metric computes the similarity of a real column vs. a synthetic column in terms of the column shapes. KSComplement uses the Kolmogorov-Smirnov statistic. Closer to 1.0 is good and 0 is worst

from sdmetrics.single_column import KSComplement
numerical_columns=['ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']
total_score = 0
for column_name in numerical_columns:
    column_score = KSComplement.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(numerical_columns))

Column: ballNum , Score:  0.9502754283367316
Column: ballsRemaining , Score:  0.8770284103276166
Column: runs , Score:  0.9136464248633367
Column: runRate , Score:  0.9183841670732166
Column: numWickets , Score:  0.9016209114638712
Column: runsMomentum , Score:  0.8773491702213716
Column: perfIndex , Score:  0.9173808852778924

Average:  0.9079550567948624
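
For intuition, KSComplement is essentially one minus the two-sample Kolmogorov-Smirnov statistic, which can be cross-checked directly with scipy (a sketch, assuming df and synthetic are the real and synthetic DataFrames loaded above):

from scipy.stats import ks_2samp

# KSComplement ~ 1 - KS statistic: 1.0 means the two column shapes are identical
ks_stat = ks_2samp(df['runs'], synthetic['runs']).statistic
print('runs KSComplement (by hand): ', 1 - ks_stat)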

b) Quality of categorical columns

This statistic measures the quality of generated categorical columns. 1 is best and 0 is worst

categorical_columns=['batsmanIdx','bowlerIdx']
from sdmetrics.single_column import TVComplement

total_score = 0
for column_name in categorical_columns:
    column_score = TVComplement.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(categorical_columns))

Column: batsmanIdx , Score:  0.8436263499539245
Column: bowlerIdx , Score:  0.7356177407921669

Average:  0.7896220453730457

The performance is decent but not excellent. I was unable to execute more epochs as that required more memory than Kaggle allowed

c) Correlation similarity

This metric measures the correlation between a pair of numerical columns and computes the similarity between the real and synthetic data – it compares the trends of the 2D distributions. 1.0 is best and 0.0 is worst

import itertools
from sdmetrics.column_pairs import CorrelationSimilarity

total_score = 0
total_pairs = 0
for pair in itertools.combinations(numerical_columns,2):
    col_A, col_B = pair
    score = CorrelationSimilarity.compute(df[[col_A, col_B]], synthetic[[col_A, col_B]])
    print('Columns:', pair, ' Score:', score)
    total_score += score
    total_pairs += 1

print('\nAverage: ', total_score/total_pairs)

Columns: ('ballNum', 'ballsRemaining')  Score: 0.7153942317384889
Columns: ('ballNum', 'runs')  Score: 0.8838043045134777
Columns: ('ballNum', 'runRate')  Score: 0.8710243133637056
Columns: ('ballNum', 'numWickets')  Score: 0.7978515509750435
Columns: ('ballNum', 'runsMomentum')  Score: 0.8956281260834316
Columns: ('ballNum', 'perfIndex')  Score: 0.9275145840528048
Columns: ('ballsRemaining', 'runs')  Score: 0.9566928975064546
Columns: ('ballsRemaining', 'runRate')  Score: 0.9127313819127167
Columns: ('ballsRemaining', 'numWickets')  Score: 0.6770737279315224
Columns: ('ballsRemaining', 'runsMomentum')  Score: 0.7939260278412358
Columns: ('ballsRemaining', 'perfIndex')  Score: 0.8694582252638351
Columns: ('runs', 'runRate')  Score: 0.999593795992159
Columns: ('runs', 'numWickets')  Score: 0.9510731832916608
Columns: ('runs', 'runsMomentum')  Score: 0.9956131422133428
Columns: ('runs', 'perfIndex')  Score: 0.9742931845536701
Columns: ('runRate', 'numWickets')  Score: 0.8859830711832263
Columns: ('runRate', 'runsMomentum')  Score: 0.9174744874779561
Columns: ('runRate', 'perfIndex')  Score: 0.9491100087911353
Columns: ('numWickets', 'runsMomentum')  Score: 0.8989709776329797
Columns: ('numWickets', 'perfIndex')  Score: 0.7178946968801441
Columns: ('runsMomentum', 'perfIndex')  Score: 0.9744441623018661

Average:  0.8840738134048025
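
For intuition, my reading of the sdmetrics docs is that this score is 1 – |r_real – r_synthetic| / 2, where r is the (Pearson) correlation of the column pair. It can be cross-checked by hand for one pair:

# Hand-rolled check of CorrelationSimilarity for one pair of columns
r_real = df['runs'].corr(df['runRate'])
r_synth = synthetic['runs'].corr(synthetic['runRate'])
print(1 - abs(r_real - r_synth) / 2)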

d) Category coverage

This metric measures whether a synthetic column covers all the possible categories that are present in a real column. 1.0 is best , 0 is worst

from sdmetrics.single_column import CategoryCoverage

total_score = 0
for column_name in categorical_columns:
    column_score = CategoryCoverage.compute(df[column_name], synthetic[column_name])
    total_score += column_score
    print('Column:', column_name, ', Score: ', column_score)

print('\nAverage: ', total_score/len(categorical_columns))

Column: batsmanIdx , Score:  0.9533951919021509
Column: bowlerIdx , Score:  0.9913966160022942

Average:  0.9723959039522225
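
For intuition, category coverage can be approximated by hand as the fraction of distinct real categories that also appear in the synthetic column (again my reading of the metric; verify against the sdmetrics docs):

# Fraction of real batsmanIdx categories that the synthetic data also covers
real_cats = set(df['batsmanIdx'])
synth_cats = set(synthetic['batsmanIdx'])
print(len(real_cats & synth_cats) / len(real_cats))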

H. Augmenting the T20 match data set

In this final part I augment my T20 match data set with the generated synthetic T20 data set.

import pandas as pd
from numpy import savetxt
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np


from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model
import matplotlib.pyplot as plt

# Read real and synthetic data
df = pd.read_csv('/kaggle/input/cricket1/t20.csv')
synthetic=pd.read_csv('/kaggle/input/synthetic/synthetic.csv')

# Augment the data. Concatenate real & synthetic data
df1=pd.concat([df,synthetic])

# Create training and test samples
print("Shape of dataframe=",df1.shape)
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
print(train_dataset1.shape)

a=train_dataset1.describe()
stats=a.transpose()
print(a)

a) Create a Deep Learning Model in Keras

from numpy.random import seed
seed(33)
tf.random.set_seed(432)
# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

no_of_unique_batman=len(df1["batsmanIdx"].unique()) 
print(no_of_unique_batman)
no_of_unique_bowler=len(df1["bowlerIdx"].unique()) 
print(no_of_unique_bowler)

embedding_size_bat = no_of_unique_batman ** (1/4)
print(embedding_size_bat)
embedding_size_bwl = no_of_unique_bowler ** (1/4)
print(embedding_size_bwl)

# create embedding layers for the categorical predictors. Note: output_dim is fixed at 16 here, rather than the computed 4th-root size
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
print(batsmanIdx_embedding)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
print(batsmanIdx_flatten)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)
print(bowlerIdx_flatten)
# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])
print(x.shape)
# add hidden layers
x = Dense(96, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)
# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)
# create model
model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()
# compile model
#optimizer=keras.optimizers.Adam(learning_rate=.01, beta_1=0.1, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=True)
#optimizer=keras.optimizers.RMSprop(learning_rate=0.001, rho=0.2, momentum=0.2, epsilon=1e-07)
#optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1) #- Works without dropout
#optimizer = tf.keras.optimizers.RMSprop(0.01)
#optimizer=keras.optimizers.SGD(learning_rate=.01,momentum=0.1)
#optimizer=keras.optimizers.RMSprop(learning_rate=.005, rho=0.1, momentum=0, epsilon=1e-07)

optimizer=keras.optimizers.Adam(learning_rate=.015, beta_1=0.9, beta_2=0.999, epsilon=1e-07, amsgrad=True)

model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# train the model
history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=20, batch_size=1024,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

==================================================================================================
Total params: 144,497
Trainable params: 144,497
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/20
1219/1219 [==============================] - 15s 11ms/step - loss: 0.6285 - accuracy: 0.6372 - val_loss: 0.5164 - val_accuracy: 0.7606
Epoch 2/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.5594 - accuracy: 0.7121 - val_loss: 0.4920 - val_accuracy: 0.7663
Epoch 3/20
1219/1219 [==============================] - 14s 12ms/step - loss: 0.5338 - accuracy: 0.7244 - val_loss: 0.4541 - val_accuracy: 0.7878
Epoch 4/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.5176 - accuracy: 0.7317 - val_loss: 0.4226 - val_accuracy: 0.7933
Epoch 5/20
1219/1219 [==============================] - 13s 11ms/step - loss: 0.4966 - accuracy: 0.7420 - val_loss: 0.4547 - val_accuracy: 0.7
...
...
Epoch 18/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.4300 - accuracy: 0.7747 - val_loss: 0.3536 - val_accuracy: 0.8288
Epoch 19/20
1219/1219 [==============================] - 14s 12ms/step - loss: 0.4269 - accuracy: 0.7766 - val_loss: 0.3565 - val_accuracy: 0.8302
Epoch 20/20
1219/1219 [==============================] - 14s 11ms/step - loss: 0.4259 - accuracy: 0.7775 - val_loss: 0.3498 - val_accuracy: 0.831

As can be seen, the accuracy with the augmented dataset is around 0.77, while with just the real data I was getting 0.867. This degradation is probably due to the following reasons

  • Only a fraction of the dataset was used for training the CTGAN. This was not representative of the data distribution, so CTGAN could not synthesise the data correctly
  • The number of epochs had to be kept low to prevent Kaggle/Colab from crashing

I. Conclusion

This post shows how we can generate synthetic T20 match data to augment real T20 match data. Assuming we have sufficient processing power, we should be able to generate enough synthetic data to augment our data set, which should improve the accuracy of the Win Probability Deep Learning model.

References

  1. Generative Adversarial Networks – Ian Goodfellow et al.
  2. Modeling Tabular data using Conditional GAN
  3. Introduction to GAN
  4. Ian Goodfellow: Generative Adversarial Networks (GANs) | Lex Fridman Podcast
  5. CTGAN
  6. Tabular Synthetic Data Generation using CTGAN
  7. CTGAN Model
  8. Interpreting the Progress of CTGAN
  9. CTGAN metrics

Also see

  1. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players
  2. Using Reinforcement Learning to solve Gridworld
  3. Deep Learning from first principles in Python, R and Octave – Part 4
  4. Practical Machine Learning with R and Python – Part 5
  5. Cricketr adds team analytics to its repertoire!!!
  6. yorkpy takes a hat-trick, bowls out Intl. T20s, BBL and Natwest T20!!!
  7. Deconstructing Convolutional Neural Networks with Tensorflow and Keras
  8. My TEDx talk on the “Internet of Things”
  9. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  10. The Anomaly

To see all posts click Index of posts

GooglyPlusPlus: Win Probability using Deep Learning and player embeddings

In my last post ‘GooglyPlusPlus now with Win Probability Analysis for all T20 matches‘ I had discussed the performance of my ML models, created with and without player embeddings, in computing the Win Probability of T20 matches. With batsman & bowler embeddings I got much better performance than without the embeddings

  • glmnet – Accuracy – 0.73
  • Random Forest (RF) – Accuracy – 0.92

While the Random Forest gave excellent accuracy, it was bulky and also took an unusually long time to predict the Win Probability of a single T20 match. The above 2 ML models were built using R’s Tidymodels. glmnet was fast, but I wanted to see if I could create a ML model that was better, lighter and faster. I had initially tried to use Tensorflow, Keras in Python but then abandoned it, since I did not know how to port the Deep Learning model to R and use it in my app GooglyPlusPlus.

But later, since I was stuck with a bulky Random Forest model, I decided to again explore options for saving the Keras Deep Learning model and loading it in R. I found out that by saving the model as .h5, we can load it in R and use it for predictions. Hence, I rebuilt a Deep Learning model using Keras, Python with player embeddings and I got excellent performance. The DL model was light and had an accuracy of 0.8639 with an ROC_AUC of 0.964 which was great!

GooglyPlusPlus uses data from Cricsheet and is based on my R package yorkr

You can try out this latest version of GooglyPlusPlus at gpp2023-1

Here are the steps

A. Build a Keras Deep Learning model

a. Import necessary packages

import pandas as pd
import numpy as np
from zipfile import ZipFile
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras import regularizers
from pathlib import Path
import matplotlib.pyplot as plt

b. Upload the data of all 9 T20 leagues (BBL, CPL, IPL, T20 (men), T20 (women), NTB, PSL, SSM, WBB)

# Read all T20 leagues 
df1=pd.read_csv('t20.csv')
print("Shape of dataframe=",df1.shape)

# Create training and test data set
train_dataset = df1.sample(frac=0.8,random_state=0)
test_dataset = df1.drop(train_dataset.index)
train_dataset1 = train_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_dataset1 = test_dataset[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
train_dataset1

# Set the target data
train_labels = train_dataset.pop('isWinner')
test_labels = test_dataset.pop('isWinner')
train_dataset1

a=train_dataset1.describe()
stats=a.transpose()
a

c. Create a Deep Learning ML model using batsman & bowler embeddings

import pandas as pd
import numpy as np
from keras.layers import Input, Embedding, Flatten, Dense
from keras.models import Model
from keras.layers import Input, Embedding, Flatten, Dense, Reshape, Concatenate, Dropout
from keras.models import Model

# Set seed
tf.random.set_seed(432)

# create input layers for each of the predictors
batsmanIdx_input = Input(shape=(1,), name='batsmanIdx')
bowlerIdx_input = Input(shape=(1,), name='bowlerIdx')
ballNum_input = Input(shape=(1,), name='ballNum')
ballsRemaining_input = Input(shape=(1,), name='ballsRemaining')
runs_input = Input(shape=(1,), name='runs')
runRate_input = Input(shape=(1,), name='runRate')
numWickets_input = Input(shape=(1,), name='numWickets')
runsMomentum_input = Input(shape=(1,), name='runsMomentum')
perfIndex_input = Input(shape=(1,), name='perfIndex')

# Set the embedding size as the 4th root of unique batsmen, bowlers
no_of_unique_batman=len(df1["batsmanIdx"].unique()) 
no_of_unique_bowler=len(df1["bowlerIdx"].unique()) 
embedding_size_bat = no_of_unique_batman ** (1/4)
embedding_size_bwl = no_of_unique_bowler ** (1/4)


# create embedding layers for the categorical predictors. Note: output_dim is fixed at 16 here, rather than the computed 4th-root size
batsmanIdx_embedding = Embedding(input_dim=no_of_unique_batman+1, output_dim=16,input_length=1)(batsmanIdx_input)
batsmanIdx_flatten = Flatten()(batsmanIdx_embedding)
bowlerIdx_embedding = Embedding(input_dim=no_of_unique_bowler+1, output_dim=16,input_length=1)(bowlerIdx_input)
bowlerIdx_flatten = Flatten()(bowlerIdx_embedding)

# concatenate all the predictors
x = keras.layers.concatenate([batsmanIdx_flatten,bowlerIdx_flatten, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input])

# add hidden layers
# Use dropouts for regularisation
x = Dense(64, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(32, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(16, activation='relu')(x)
x = Dropout(0.1)(x)
x = Dense(8, activation='relu')(x)
x = Dropout(0.1)(x)

# add output layer
output = Dense(1, activation='sigmoid', name='output')(x)
print(output.shape)

# create a DL model
model = Model(inputs=[batsmanIdx_input,bowlerIdx_input, ballNum_input, ballsRemaining_input, runs_input, runRate_input, numWickets_input, runsMomentum_input, perfIndex_input], outputs=output)
model.summary()

# compile model
optimizer=keras.optimizers.Adam(learning_rate=.01, beta_1=0.9, beta_2=0.999, epsilon=1e-07, decay=0.0, amsgrad=True)

model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'])

# train the model
history=model.fit([train_dataset1['batsmanIdx'],train_dataset1['bowlerIdx'],train_dataset1['ballNum'],train_dataset1['ballsRemaining'],train_dataset1['runs'],
           train_dataset1['runRate'],train_dataset1['numWickets'],train_dataset1['runsMomentum'],train_dataset1['perfIndex']], train_labels, epochs=40, batch_size=1024,
          validation_data = ([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']],test_labels), verbose=1)

plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.show()

Model: "model_5"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to                     
==================================================================================================
 batsmanIdx (InputLayer)        [(None, 1)]          0           []                               
                                                                                                  
 bowlerIdx (InputLayer)         [(None, 1)]          0           []                               
                                                                                                  
 embedding_10 (Embedding)       (None, 1, 16)        75888       ['batsmanIdx[0][0]']             
                                                                                                  
 embedding_11 (Embedding)       (None, 1, 16)        55808       ['bowlerIdx[0][0]']              
                                                                                                  
 flatten_10 (Flatten)           (None, 16)           0           ['embedding_10[0][0]']           
                                                                                                  
 flatten_11 (Flatten)           (None, 16)           0           ['embedding_11[0][0]']           
                                                                                                  
 ballNum (InputLayer)           [(None, 1)]          0           []                               
                                                                                                  
 ballsRemaining (InputLayer)    [(None, 1)]          0           []                               
                                                                                                  
 runs (InputLayer)              [(None, 1)]          0           []                               
                                                                                                  
 runRate (InputLayer)           [(None, 1)]          0           []                               
                                                                                                  
 numWickets (InputLayer)        [(None, 1)]          0           []                               
                                                                                                  
 runsMomentum (InputLayer)      [(None, 1)]          0           []                               
                                                                                                  
 perfIndex (InputLayer)         [(None, 1)]          0           []                               
                                                                                                  
 concatenate_5 (Concatenate)    (None, 39)           0           ['flatten_10[0][0]',             
                                                                  'flatten_11[0][0]',             
                                                                  'ballNum[0][0]',                
                                                                  'ballsRemaining[0][0]',         
                                                                  'runs[0][0]',                   
                                                                  'runRate[0][0]',                
                                                                  'numWickets[0][0]',             
                                                                  'runsMomentum[0][0]',           
                                                                  'perfIndex[0][0]']              
                                                                                                  
 dense_19 (Dense)               (None, 64)           2560        ['concatenate_5[0][0]']          
                                                                                                  
 dropout_19 (Dropout)           (None, 64)           0           ['dense_19[0][0]']               
                                                                                                  
 dense_20 (Dense)               (None, 32)           2080        ['dropout_19[0][0]']             
                                                                                                  
 dropout_20 (Dropout)           (None, 32)           0           ['dense_20[0][0]']               
                                                                                                  
 dense_21 (Dense)               (None, 16)           528         ['dropout_20[0][0]']             
                                                                                                  
 dropout_21 (Dropout)           (None, 16)           0           ['dense_21[0][0]']               
                                                                                                  
 dense_22 (Dense)               (None, 8)            136         ['dropout_21[0][0]']             
                                                                                                  
 dropout_22 (Dropout)           (None, 8)            0           ['dense_22[0][0]']               
                                                                                                  
 output (Dense)                 (None, 1)            9           ['dropout_22[0][0]']             
                                                                                                  
==================================================================================================
Total params: 137,009
Trainable params: 137,009
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/40
937/937 [==============================] - 11s 10ms/step - loss: 0.5683 - accuracy: 0.6968 - val_loss: 0.4480 - val_accuracy: 0.7708
Epoch 2/40
937/937 [==============================] - 9s 10ms/step - loss: 0.4477 - accuracy: 0.7721 - val_loss: 0.4305 - val_accuracy: 0.7833
Epoch 3/40
937/937 [==============================] - 9s 10ms/step - loss: 0.4229 - accuracy: 0.7832 - val_loss: 0.3984 - val_accuracy: 0.7936
...
...
937/937 [==============================] - 10s 10ms/step - loss: 0.2909 - accuracy: 0.8627 - val_loss: 0.2943 - val_accuracy: 0.8613
Epoch 38/40
937/937 [==============================] - 10s 10ms/step - loss: 0.2892 - accuracy: 0.8633 - val_loss: 0.2933 - val_accuracy: 0.8621
Epoch 39/40
937/937 [==============================] - 10s 10ms/step - loss: 0.2889 - accuracy: 0.8638 - val_loss: 0.2941 - val_accuracy: 0.8620
Epoch 40/40
937/937 [==============================] - 10s 11ms/step - loss: 0.2886 - accuracy: 0.8639 - val_loss: 0.2929 - val_accuracy: 0.8621

d. Compute and plot the ROC-AUC for the above model

from sklearn.metrics import roc_curve, auc

# Select a random sample set
tf.random.set_seed(59)
train = df1.sample(frac=0.9, random_state=0)
test = df1.drop(train.index)
test_dataset1 = test[['batsmanIdx','bowlerIdx','ballNum','ballsRemaining','runs','runRate','numWickets','runsMomentum','perfIndex']]
test_labels = test.pop('isWinner')

# Compute the predicted probabilities on the test set
y_pred_keras = model.predict([test_dataset1['batsmanIdx'],test_dataset1['bowlerIdx'],test_dataset1['ballNum'],test_dataset1['ballsRemaining'],test_dataset1['runs'],
           test_dataset1['runRate'],test_dataset1['numWickets'],test_dataset1['runsMomentum'],test_dataset1['perfIndex']]).ravel()

# Compute TPR & FPR at each threshold
fpr_keras, tpr_keras, thresholds_keras = roc_curve(test_labels, y_pred_keras)

# Compute and plot the Area Under the Curve (AUC)
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
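
Alternatively, scikit-learn's roc_auc_score computes the AUC in a single step from the labels and predicted probabilities, which is a handy cross-check of the value obtained above

from sklearn.metrics import roc_auc_score

# One-step AUC from the test labels and predicted probabilities
print("ROC AUC:", roc_auc_score(test_labels, y_pred_keras))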

The ROC AUC for the Deep Learning model is 0.946, as seen below

e. Save the Keras model for use in Python

# Save the trained model in HDF5 format
model.save("wpDL.h5")
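
As a quick sanity check before moving to R, the saved HDF5 file can be reloaded in Python with Keras' load_model; a minimal sketch

from tensorflow.keras.models import load_model

# Reload the saved model and confirm the architecture round-trips intact
reloaded_model = load_model("wpDL.h5")
reloaded_model.summary()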

f. Load the model in R using the keras package for use in GooglyPlusPlus (load_model_hdf5() is provided by keras, not rhdf5)

library(keras)
dl_model <- load_model_hdf5('wpDL.h5')

Being able to create the Deep Learning model in Python and use it in my Shiny app GooglyPlusPlus was a huge success for me. The Keras Deep Learning model is light-weight and extremely fast.

The Deep Learning model has now been integrated into GooglyPlusPlus. Now you can check the Win Probability using both a) glmnet (Logistic Regression with lasso regularisation) and b) a Keras Deep Learning model with dropouts for regularisation

In addition, I have created 2 features based on Win Probability (WP):

i) Win Probability (Side-by-side) – Plot (interactive): With this functionality the 1st and 2nd innings are shown side-by-side. When the 1st innings is played by team 1, the Win Probability of team 2 = 100 – WP(team 1). Similarly, when the 2nd innings is being played by team 2, the Win Probability of team 1 = 100 – WP(team 2). A small sketch of this complement calculation follows below.

ii) Win Probability (Overlapping) – Plot (static): With this functionality the Win Probabilities of both team 1 (1st innings) & team 2 (2nd innings) are displayed overlapping, so that we can see how the probabilities vary ball-by-ball.
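
Here is a minimal sketch of the complement calculation used in the side-by-side plot; the dataframe and column names below are illustrative only, not the actual GooglyPlusPlus schema.

import pandas as pd

# Hypothetical ball-by-ball Win Probability (in %) of the batting side;
# 'wpTeam1'/'wpTeam2' are illustrative column names, not the app's schema
innings1 = pd.DataFrame({'ballNum': [1, 2, 3],
                         'wpTeam1': [52.0, 55.5, 49.0]})

# The other team's Win Probability is simply the complement
innings1['wpTeam2'] = 100 - innings1['wpTeam1']
print(innings1)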

Note: Since the same UI is used for all match functions, I had to re-use the Plot (interactive) and Plot (static) radio buttons for Win Probability (Side-by-side) and Win Probability (Overlapping) respectively

Here are screenshots using both ML models, with both options, for some random matches

B) ICC T20 Men's World Cup – Netherlands vs South Africa – 2022-11-06

i) Match Worm wicket chart

ii) Win Probability with LR (Side-by-side – Plot (interactive))

iii) Win Probability with LR (Overlapping – Plot (static))

iv) Win Probability with Deep Learning (Side-by-side – Plot (interactive))

Around the 213th ball of the match, South Africa was slightly ahead of Netherlands. After that they crashed and burned!

v) Win Probability with Deep Learning (Overlapping – Plot (static))

It can be seen that at the 94th ball of both innings, South Africa was ahead of Netherlands before the eventual slump.

C) Intl. T20 (Women) India – New Zealand – 2020-02-27

Here is an interesting match between the India and New Zealand T20 Women's teams. NZ successfully chased down India's total in a match of wildly swinging fortunes. See the charts below

i) Match Worm Wicket chart

ii) Win Probability with LR (Side-by-side – Plot (interactive))

iii) Win Probability with LR (Overlapping – Plot (static))

iv) Win Probability with DL model (Side-by-side – Plot (interactive))

v) Win Probability with DL model (Overlapping – Plot (static))

The above functionality for plotting the Win Probability using LR or DL, with both options (Side-by-side or Overlapping), is available for all 9 T20 leagues currently supported by GooglyPlusPlus.

Go ahead and give gpp2023-1 a try!!!

Do also check out my other posts

  1. Deep Learning from first principles in Python, R and Octave – Part 7
  2. Big Data 6: The T20 Dance of Apache NiFi and yorkpy
  3. Latency, throughput implications for the Cloud
  4. Design Principles of Scalable, Distributed Systems
  5. Cricpy adds team analytics to its arsenal!!
  6. Analyzing performances of cricketers using cricketr template
  7. Modeling a Car in Android
  8. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
  9. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  10. Experiments with deblurring using OpenCV
  11. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players

To see all posts click Index of posts

GooglyPlusPlus now with Win Probability Analysis for all T20 matches

In my 2 earlier posts, Computing Win-Probability of T20 matches and Boosting Win Probability accuracy with player embeddings, I had discussed approaches to computing the ball-by-ball Win Probability of a T20 match. My best ML models were:

  • glmnet – Logistic Regression (LR) with lasso regularisation and penalty – Accuracy – 0.73
  • Random Forest (RF) – Accuracy – 0.92

Incidentally, both these models can be used on live streaming ball-by-ball data, if available.

I have now integrated the trained Logistic Regression model with penalty into my Shiny app GooglyPlusPlus. Unfortunately, the Random Forest model, besides being computationally intensive, is also heavy-weight (1.29 GB) compared to the LR model, which is just 91.2 MB. So, I was not able to upload the Random Forest model to Shiny, as the memory required exceeded that allowed in my paid subscription.

However, I will demonstrate the performance of both models: LR (in my Web app) and RF (on my local machine). Incidentally, the Random Forest model takes a long time to load and even longer (~90 secs) to compute the Win Probability of a T20 match, while the LR model computes it in a few seconds. Interestingly, I find the LR model's Win Probability more intuitive and explainable than the Random Forest's. Possibly, the RF model overfits; I need to explore this more. Anyway, take a look at some interesting Win Probability charts (fortune swings of teams!!!) over the course of the T20 match.

You can try out this latest version here at GooglyPlusPlus!!

Some major upsets in the ICC T20 World Cup, 2022

A) Netherlands vs South Africa – 2022-11-06

B) Zimbabwe vs Pakistan – 2022-10-27

1a) Netherlands vs South Africa – ICC 2022-11-06 (Worm-wicket chart)

Netherlands shocked South Africa and ended South Africa's hopes of a place in the semi-finals. The worm-wicket chart for this match is shown below

The 2 circled areas are where South Africa lost the plot, around the 8th over (ball ~120+48=168) and the 15th over (ball ~120+90=210); the worm-wicket chart numbers balls continuously across both innings, so the 120 balls of the 1st innings are added to the ball count of the 2nd innings.

Around balls 205-215 of the match, South Africa started to lose.
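
As a quick illustration of this ball numbering (assuming a full 20-over, 120-ball 1st innings), here is a small helper; the function is mine, for illustration only, and is not part of GooglyPlusPlus.

BALLS_PER_INNINGS = 20 * 6  # 120 balls in a full T20 innings

def combined_ball_number(innings, over):
    """Ball number on the combined both-innings axis at the end of 'over'.
    'innings' is 1 or 2; assumes six-ball overs and no extra deliveries."""
    offset = BALLS_PER_INNINGS if innings == 2 else 0
    return offset + over * 6

print(combined_ball_number(2, 8))   # 120 + 48 = 168
print(combined_ball_number(2, 15))  # 120 + 90 = 210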

1b) Netherlands vs South Africa – ICC 2022-11-06 – Logistic Regression with regularisation (Shiny)

1c) Netherlands vs South Africa – ICC 2022-11-06 – Random Forest (not in Web app, local)

If you notice, for some reason, the Random Forest model decided that Netherlands was on the winning side right from the start. Why would this happen? Possibly overfitting, I presume… One generic way to check this is sketched below.
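
A simple test of the overfitting hypothesis is to compare the model's accuracy on the training data against held-out data: a large gap suggests overfitting. A minimal sketch with scikit-learn, using placeholder data rather than the actual T20 dataset:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the 9 ball-by-ball features
X, y = make_classification(n_samples=5000, n_features=9, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# A training accuracy far above the test accuracy points to overfitting
print("Train accuracy:", rf.score(X_train, y_train))
print("Test accuracy :", rf.score(X_test, y_test))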

2a) Zimbabwe vs Pakistan – ICC 2022-10-27 – Worm-wicket chart

Pakistan seemed to be cruising along, with just 11 runs needed in the last over, but for some reason they panicked and lost.

2b) Zimbabwe vs Pakistan – ICC 2022-10-27 – Logistic Regression with regularisation (Shiny)

It can be seen that Pakistan did seem to have the upper hand, save for the last over.

2c) Zimbabwe vs Pakistan – ICC 2022-10-27 – Random Forest (not in Web app, local)

Again, the Random Forest model implies that Zimbabwe was on a winning footing except in brief stretches, e.g. around ball 248 of the match

So while the accuracy of the Random Forest model is better by about ~20%, I feel the Logistic Regression with penalty has generalised better and is more intuitive. Meanwhile, I will see if I can improve LR or try another model which provides better accuracy besides generalising well

Henceforth, I will only be using the LR model that is in the Shiny app.

3a) England vs New Zealand T20 Women – 2021-09-04

Another close match till the 15th over. After that, England seems to have had a slower strike rate and lost

3b) England vs New Zealand T20 Women – 2021-09-04 – Logistic Regression

4a) Chennai Super Kings vs Gujarat Titans (IPL 2022) – Worm wicket chart

4b) Chennai Super Kings vs Gujarat Titans (IPL 2022) – Logistic Regression

5a) Islamabad United vs Peshawar Zalmi – 2021-06-17 – Worm wicket chart

This match seems close, with both worms intertwined almost all the way

5b) Islamabad United vs Peshawar Zalmi – 2021-06-17 – Logistic Regression

According to the model, Peshawar Zalmi lost the game around the 14th-15th over

Feel free to play around with the latest GooglyPlusPlus

Conclusion

Meanwhile, I will try to come up with a better model which executes fast, generalises well and is accurate. Tall order, no doubt!!!

Till such time play around with GooglyPlusPlus

Also check out my other posts

  1. Using embeddings, collaborative filtering with Deep Learning to analyse T20 players
  2. Computer Vision: Ramblings on derivatives, histograms and contours
  3. Deep Learning from first principles in Python, R and Octave – Part 4
  4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
  5. How to program – Some essential tips
  6. Cricpy performs granular analysis of players
  7. Analyzing World Bank data with WDI, googleVis Motion Charts
  8. Practical Machine Learning with R and Python – Part 5
  9. Presentation on “Intelligent Networks, CAMEL protocol, services & applications”

To see all posts click Index of posts