Cricpy performs granular analysis of players

“Gold medals aren’t really made of gold. They’re made of sweat, determination, & a hard-to-find alloy called guts.” Dan Gable

“It doesn’t matter whether you are pursuing success in business, sports, the arts, or life in general: The bridge between wishing and accomplishing is discipline” Harvey Mackay

“I won’t predict anything historic. But nothing is impossible.” Michael Phelps

Introduction

In this post, I introduce 2 new functions in my Python package ‘cricpy’ (cricpy v0.20) see Introducing cricpy:A python package to analyze performances of cricketers which enable granular analysis of batsmen and bowlers. They are

  1. Step 1: getPlayerDataHA – This function is a wrapper around getPlayerData(), getPlayerDataOD() and getPlayerDataTT(), and adds an extra column ‘homeOrAway’ which says whether the match was played at home/away/neutral venues. A CSV file is created with this new column.
  2. Step 2: getPlayerDataOppnHA – This function allows you to slice & dice the data for batsmen and bowlers against specific oppositions, at home/away/neutral venues and between certain periods. This reducedsubset of data can be used to perform analyses. A CSV file is created as an output based on the parameters of opposition, home or away and the interval of time

Note All the existing cricpy functions can be used on this smaller fine-grained data set for a closer analysis of players

This post has been published in Rpubs and can be accessed at Cricpy performs granular analysis of players

You can download a PDF version of this post at Cricpy performs granular analysis of players

I have also updated the cricpy template with these lastest changes. See cricpy-template

1. Analyzing Rahul Dravid at 3 different stages of his career

The following functions analyze Rahul Dravid during 3 different periods of his illustrious career. a) 1st Jan 2001-1st Jan 2002 b) 1st Jan 2004-1st Jan 2005 c) 1st Jan 2009-1st Jan 2010

import cricpy.analytics as ca
# Get the homeOrAway dataset for Dravid in matches
# Note:Since I have already got the data I reuse the CSV file
#df=ca.getPlayerDataHA(28114,tfile="dravidTestHA.csv",matchType="Test")

# Get Dravid's data for 2001-02
df1=ca.getPlayerDataOppnHA(infile="dravidTestHA.csv",outfile="dravidTest2001.csv",startDate="2001-01-01",endDate="2002-01-01")

# Get Dravid's data for 2004-05
df2=ca.getPlayerDataOppnHA(infile="dravidTestHA.csv",outfile="dravidTest2004.csv", startDate="2004-01-01",endDate="2005-01-01")

# Get Dravid's data for 2009-10
df3=ca.getPlayerDataOppnHA(infile="dravidTestHA.csv",outfile="dravidTest2009.csv",startDate="2009-01-01",endDate="2010-01-01")

1a. Plot the performance of Dravid at venues during 2001,2004,2009

Note: Any of the cricpy functions can be used on the fine-grained subset of data as below.

import cricpy.analytics as ca
ca.batsmanAvgRunsGround("dravidTest2001.csv","Dravid-2001")

ca.batsmanAvgRunsGround("dravidTest2004.csv","Dravid-2004")

ca.batsmanAvgRunsGround("dravidTest2009.csv","Dravid-2009")


1b. Plot the performance of Dravid against different oppositions during 2001,2004,2009

import cricpy.analytics as ca
ca.batsmanAvgRunsOpposition("dravidTest2001.csv","Dravid-2001")

ca.batsmanAvgRunsOpposition("dravidTest2004.csv","Dravid-2004")


ca.batsmanAvgRunsOpposition("dravidTest2009.csv","Dravid-2009")


1c. Plot the relative cumulative average and relative strike rate of Dravid in 2001,2004,2009

The plot below compares Dravid’s cumulative strike rate and cumulative average during 3 different stages of his career

import cricpy.analytics as ca
frames=["dravidTest2001.csv","dravidTest2004.csv","dravidTest2009.csv"]
names=["Dravid-2001","Dravid-2004","Dravid-2009"]
ca.relativeBatsmanCumulativeAvgRuns(frames,names)

 

ca.relativeBatsmanCumulativeStrikeRate(frames,names)

2. Analyzing Virat Kohli’s performance against England in England in 2014 and 2018

The analysis below looks at Kohli’s performance against England in ‘away’ venues (England) in 2014 and 2018

import cricpy.analytics as ca
# Get the homeOrAway data for Kohli in Test matches
#df=ca.getPlayerDataHA(253802,tfile="kohliTestHA.csv",type="batting",matchType="Test")

# Get the homeOrAway data for Kohli in Test matches
df=ca.getPlayerDataHA(253802,tfile="kohliTestHA.csv",type="batting",matchType="Test")

# Get the subset if data of Kohli's performance against England in England in 2014
df=ca.getPlayerDataOppnHA(infile="kohliTestHA.csv",outfile="kohliTestEng2014.csv",  opposition=["England"],homeOrAway=["away"],startDate="2014-01-01",endDate="2015-01-01")

# Get the subset if data of Kohli's performance against England in England in 2018
df1=ca.getPlayerDataOppnHA(infile="kohliTestHA.csv",outfile="kohliTestEng2018.csv",
   opposition=["England"],homeOrAway=["away"],startDate="2018-01-01",endDate="2019-01-01")

2a. Kohli’s performance at England grounds in 2014 & 2018

Kohli had a miserable outing to England in 2014 with a string of low scores. In 2018 Kohli pulls himself out of the morass

import cricpy.analytics as ca
ca.batsmanAvgRunsGround("kohliTestEng2014.csv","Kohli-Eng-2014")
ca.batsmanAvgRunsGround("kohliTestEng2018.csv","Kohli-Eng-2018")


2a. Kohli’s cumulative average runs in 2014 & 2018

Kohli’s cumulative average runs in 2014 is in the low 15s, while in 2018 it is 70+. Kohli stamps his class back again and undoes the bad memories of 2014

import cricpy.analytics as ca
ca.batsmanCumulativeAverageRuns("kohliTestEng2014.csv", "Kohli-Eng-2014")

ca.batsmanCumulativeAverageRuns("kohliTestEng2018.csv", "Kohli-Eng-2018")

3a. Compare the performances of Ganguly, Dravid and VVS Laxman against opposition in ‘away’ matches in Tests

The analyses below compares the performances of Sourav Ganguly, Rahul Dravid and VVS Laxman against Australia, South Africa, and England in ‘away’ venues between 01 Jan 2002 to 01 Jan 2008

import cricpy.analytics as ca
#Get the HA data for Ganguly, Dravid and Laxman
#df=ca.getPlayerDataHA(28779,tfile="gangulyTestHA.csv",type="batting",matchType="Test")
#df=ca.getPlayerDataHA(28114,tfile="dravidTestHA.csv",type="batting",matchType="Test")
#df=ca.getPlayerDataHA(30750,tfile="laxmanTestHA.csv",type="batting",matchType="Test")

# Slice the data 
df=ca.getPlayerDataOppnHA(infile="gangulyTestHA.csv",outfile="gangulyTestAES2002-08.csv" ,opposition=["Australia", "England", "South Africa"],                        homeOrAway=["away"],startDate="2002-01-01",endDate="2008-01-01")
df=ca.getPlayerDataOppnHA(infile="dravidTestHA.csv",outfile="dravidTestAES2002-08.csv" ,opposition=["Australia", "England", "South Africa"],                        homeOrAway=["away"],startDate="2002-01-01",endDate="2008-01-01")
df=ca.getPlayerDataOppnHA(infile="laxmanTestHA.csv",outfile="laxmanTestAES2002-08.csv",opposition=["Australia", "England", "South Africa"],                       homeOrAway=["away"],startDate="2002-01-01",endDate="2008-01-01")

3b Plot the relative cumulative average runs and relative cumative strike rate

Plot the relative cumulative average runs and relative cumative strike rate of Ganguly, Dravid and Laxman

-Dravid towers over Laxman and Ganguly with respect to cumulative average runs. – Ganguly has a superior strike rate followed by Laxman and then Dravid

import cricpy.analytics as ca
frames=["gangulyTestAES2002-08.csv","dravidTestAES2002-08.csv","laxmanTestAES2002-08.csv"]
names=["GangulyAusEngSA2002-08","DravidAusEngSA2002-08","LaxmanAusEngSA2002-08"]
ca.relativeBatsmanCumulativeAvgRuns(frames,names)

ca.relativeBatsmanCumulativeStrikeRate(frames,names)

4. Compare the ODI performances of Rohit Sharma, Joe Root and Kane Williamson against opposition

Compare the performances of Rohit Sharma, Joe Root and Kane williamson in away & neutral venues against Australia, West Indies and Soouth Africa

  • Joe Root piles us the runs in about 15 matches. Rohit has played far more ODIs than the other two and averages a steady 35+
import cricpy.analytics as ca
# Get the ODI HA data for Rohit, Root and Williamson
#df=ca.getPlayerDataHA(34102,tfile="rohitODIHA.csv",type="batting",matchType="ODI")
#df=ca.getPlayerDataHA(303669,tfile="joerootODIHA.csv",type="batting",matchType="ODI")
#df=ca.getPlayerDataHA(277906,tfile="williamsonODIHA.csv",type="batting",matchType="ODI")

# Subset the data for specific opposition in away and neutral venues
## C:\Users\Ganesh\ANACON~1\lib\site-packages\statsmodels\compat\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
##   from pandas.core import datetools
df=ca.getPlayerDataOppnHA(infile="rohitODIHA.csv",outfile="rohitODIAusWISA.csv"
                       ,opposition=["Australia", "West Indies", "South Africa"],
                      homeOrAway=["away","neutral"])
df=ca.getPlayerDataOppnHA(infile="joerootODIHA.csv",outfile="joerootODIAusWISA.csv"
                       ,opposition=["Australia", "West Indies", "South Africa"],
                       homeOrAway=["away","neutral"])
df=ca.getPlayerDataOppnHA(infile="williamsonODIHA.csv",outfile="williamsonODIAusWiSA.csv",opposition=["Australia", "West Indies", "South Africa"],                    homeOrAway=["away","neutral"])

4a. Compare cumulative strike rates and cumulative average runs of Rohit, Root and Williamson

The relative cumulative strike rate of all 3 are comparable

import cricpy.analytics as ca
frames=["rohitODIAusWISA.csv","joerootODIAusWISA.csv","williamsonODIAusWiSA.csv"]
names=["Rohit-ODI-AusWISA","Joe Root-ODI-AusWISA","Williamson-ODI-AusWISA"]
ca.relativeBatsmanCumulativeAvgRuns(frames,names)

ca.relativeBatsmanCumulativeStrikeRate(frames,names)

5. Plot the performance of Dhoni in T20s against specific opposition at all venues

Plot the performances of Dhoni against Australia, West Indies, South Africa and England

import cricpy.analytics as ca
# Get the HA T20 data for Dhoni
#df=ca.getPlayerDataHA(28081,tfile="dhoniT20HA.csv",type="batting",matchType="T20")
#Subset the data
df=ca.getPlayerDataOppnHA(infile="dhoniT20HA.csv",outfile="dhoniT20AusWISAEng.csv",opposition=["Australia", "West Indies", "South Africa","England"],                homeOrAway=["all"])

5a. Plot Dhoni’s performances in T20

Note You can use any of cricpy’s functions against the fine grained data

import cricpy.analytics as ca
ca.batsmanAvgRunsOpposition("dhoniT20AusWISAEng.csv","Dhoni")

ca.batsmanAvgRunsGround("dhoniT20AusWISAEng.csv","Dhoni")

ca.batsmanCumulativeStrikeRate("dhoniT20AusWISAEng.csv","Dhoni")

ca.batsmanCumulativeAverageRuns("dhoniT20AusWISAEng.csv","Dhoni")

6. Compute and performances of Anil Kumble, Muralitharan and Warne in ‘away’ test matches

Compute the performances of Kumble, Warne and Maralitharan against New Zealand, West Indies, South Africa and England in pitches that are not ‘home’ pithes

import cricpy.analytics as ca
# Get the bowling data for Kumble, Warne and Muralitharan in Test matches
#df=ca.getPlayerDataHA(30176,tfile="kumbleTestHA.csv",type="bowling",matchType="Test")
#df=ca.getPlayerDataHA(8166,tfile="warneTestHA.csv",type="bowling",matchType="Test")
#df=ca.getPlayerDataHA(49636,tfile="muraliTestHA.csv",type="bowling",matchType="Test")

# Subset the data
df=ca.getPlayerDataOppnHA(infile="kumbleTestHA.csv",outfile="kumbleTest-NZWISAEng.csv",opposition=["New Zealand", "West Indies", "South Africa","England"],
                       homeOrAway=["away"])

df=ca.getPlayerDataOppnHA(infile="warneTestHA.csv",outfile="warneTest-NZWISAEng.csv"
                       ,opposition=["New Zealand", "West Indies", "South Africa","England"], homeOrAway=["away"])

df=ca.getPlayerDataOppnHA(infile="muraliTestHA.csv",outfile="muraliTest-NZWISAEng.csv"
                       ,opposition=["New Zealand", "West Indies", "South Africa","England"], homeOrAway=["away"])

6a. Plot the average wickets of Kumble, Warne and Murali

import cricpy.analytics as ca
ca.bowlerAvgWktsOpposition("kumbleTest-NZWISAEng.csv","Kumble-NZWISAEng-AN")

ca.bowlerAvgWktsOpposition("warneTest-NZWISAEng.csv","Warne-NZWISAEng-AN")

ca.bowlerAvgWktsOpposition("muraliTest-NZWISAEng.csv","Murali-NZWISAEng-AN")

6b. Plot the average wickets in different grounds of Kumble, Warne and Murali

import cricpy.analytics as ca
ca.bowlerAvgWktsGround("kumbleTest-NZWISAEng.csv","Kumble")

ca.bowlerAvgWktsGround("warneTest-NZWISAEng.csv","Warne")

ca.bowlerAvgWktsGround("muraliTest-NZWISAEng.csv","Murali")

6c. Plot the cumulative average wickets and cumulative economy rate of Kumble, Warne and Murali

  • Murali has the best economy rate followed by Kumble and then Warne
  • Again Murali has the best cumulative average wickets followed by Warne and then Kumble
import cricpy.analytics as ca
frames=["kumbleTest-NZWISAEng.csv","warneTest-NZWISAEng.csv","muraliTest-NZWISAEng.csv"]
names=["Kumble","Warne","Murali"]
ca.relativeBowlerCumulativeAvgEconRate(frames,names)

ca.relativeBowlerCumulativeAvgWickets(frames,names)

7. Compute and plot the performances of Bumrah in 2016, 2017 and 2018 in ODIs

import cricpy.analytics as ca
# Get the HA data for Bumrah in ODI in bowling
#df=ca.getPlayerDataHA(625383,tfile="bumrahODIHA.csv",type="bowling",matchType="ODI")

# Slice the data for periods 2016, 2017 and 2018
df=ca.getPlayerDataOppnHA(infile="bumrahODIHA.csv",outfile="bumrahODI2016.csv",
                       startDate="2016-01-01",endDate="2017-01-01")

df=ca.getPlayerDataOppnHA(infile="bumrahODIHA.csv",outfile="bumrahODI2017.csv",
                       startDate="2017-01-01",endDate="2018-01-01")

df=ca.getPlayerDataOppnHA(infile="bumrahODIHA.csv",outfile="bumrahODI2018.csv",
                       startDate="2018-01-01",endDate="2019-01-01")

7a. Compute the performances of Bumrah in 2016, 2017 and 2018

  • Very clearly Bumrah is getting better at his art. His economy rate in 2018 is the best!!!
  • Bumrah has had a very prolific year in 2017. However all the years he seems to be quite effective
import cricpy.analytics as ca
frames=["bumrahODI2016.csv","bumrahODI2017.csv","bumrahODI2018.csv"]
names=["Bumrah-2016","Bumrah-2017","Bumrah-2018"]
ca.relativeBowlerCumulativeAvgEconRate(frames,names)

ca.relativeBowlerCumulativeAvgWickets(frames,names)

8. Compute and plot the performances of Shakib, Bumrah and Jadeja in T20 matches for bowling

import cricpy.analytics as ca
# Get the HA bowling data for Shakib, Bumrah and Jadeja
#df=ca.getPlayerDataHA(56143,tfile="shakibT20HA.csv",type="bowling",matchType="T20")
#df=ca.getPlayerDataHA(625383,tfile="bumrahT20HA.csv",type="bowling",matchType="T20")
#df=ca.getPlayerDataHA(234675,tfile="jadejaT20HA.csv",type="bowling",matchType="T20")

# Slice the data for performances against Sri Lanka, Australia, South Africa and England
df=ca.getPlayerDataOppnHA(infile="shakibT20HA.csv",outfile="shakibT20-SLAusSAEng.csv" ,opposition=["Sri Lanka","Australia", "South Africa","England"],
                       homeOrAway=["all"])
df=ca.getPlayerDataOppnHA(infile="bumrahT20HA.csv",outfile="bumrahT20-SLAusSAEng.csv",opposition=["Sri Lanka","Australia", "South Africa","England"],
                       homeOrAway=["all"])

df=ca.getPlayerDataOppnHA(infile="jadejaT20HA.csv",outfile="jadejaT20-SLAusSAEng.csv"                      ,opposition=["Sri Lanka","Australia", "South Africa","England"],   homeOrAway=["all"])

8a. Compare the relative performances of Shakib, Bumrah and Jadeja

  • Jadeja and Bumrah have comparable economy rates. Shakib is more expensive
  • Shakib pips Bumrah in number of cumulative wickets, though Bumrah is close behind
import cricpy.analytics as ca
frames=["shakibT20-SLAusSAEng.csv","bumrahT20-SLAusSAEng.csv","jadejaT20-SLAusSAEng.csv"]
names=["Shakib-SLAusSAEng","Bumrah-SLAusSAEng","Jadeja-SLAusSAEng"]
ca.relativeBowlerCumulativeAvgEconRate(frames,names)

ca.relativeBowlerCumulativeAvgWickets(frames,names)

Conclusion

By getting the homeOrAway data for players using the profileNo, you can slice and dice the data based on your choice of opposition, whether you want matches that were played at home/away/neutral venues. Finally by specifying the period for which the data has to be subsetted you can create fine grained analysis.

Hope you have a great time with cricpy!!!

Also see
1. My book ‘Cricket analytics with cricketr and cricpy’ is now on Amazon
2. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon
3. Exploring Quantum Gate operations with QCSimulator
4. Deep Learning from first principles in Python, R and Octave – Part 6
5. Natural selection of database technology through the years
6. Pitching yorkpy … short of good length to IPL – Part 1
7. Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
8. Practical Machine Learning with R and Python – Part 3

To see all posts click Index of posts

Getting started with Tensorflow, Keras in Python and R

The Pale Blue Dot

“From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Consider again that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there—on the mote of dust suspended in a sunbeam.”

Carl Sagan

Tensorflow and Keras are Deep Learning frameworks that really simplify a lot of things to the user. If you are familiar with Machine Learning and Deep Learning concepts then Tensorflow and Keras are really a playground to realize your ideas.  In this post I show how you can get started with Tensorflow in both Python and R

 

Tensorflow in Python

For tensorflow in Python, I found Google’s Colab an ideal environment for running your Deep Learning code. This is an Google’s research project  where you can execute your code  on GPUs, TPUs etc

Tensorflow in R (RStudio)

To execute tensorflow in R (RStudio) you need to install tensorflow and keras as shown below
In this post I show how to get started with Tensorflow and Keras in R.

# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install_packages("keras")
library(tensorflow)
libary(keras)

This post takes 3 different Machine Learning problems and uses the
Tensorflow/Keras framework to solve it

Note:
You can view the Google Colab notebook at Tensorflow in Python
The RMarkdown file has been published at RPubs and can be accessed
at Getting started with Tensorflow in R

Checkout my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-Layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as paperback ($14.99) and in kindle version($9.99/Rs449).

1. Multivariate regression with Tensorflow – Python

This code performs multivariate regression using Tensorflow and keras on the advent of Parkinson disease through sound recordings see Parkinson Speech Dataset with Multiple Types of Sound Recordings Data Set . The clinician’s motorUPDRS score has to be predicted from the set of features

In [0]:
# Import tensorflow
import tensorflow as tf
from tensorflow import keras
In [2]:
#Get the data rom the UCI Machine Learning repository
dataset = keras.utils.get_file("parkinsons_updrs.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")
Downloading data from https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data
917504/911261 [==============================] - 0s 0us/step
In [3]:
# Read the CSV file 
import pandas as pd
parkinsons = pd.read_csv(dataset, na_values = "?", comment='\t',
                      sep=",", skipinitialspace=True)
print(parkinsons.shape)
print(parkinsons.columns)
#Check if there are any NAs in the rows
parkinsons.isna().sum()
(5875, 22)
Index(['subject#', 'age', 'sex', 'test_time', 'motor_UPDRS', 'total_UPDRS',
       'Jitter(%)', 'Jitter(Abs)', 'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP',
       'Shimmer', 'Shimmer(dB)', 'Shimmer:APQ3', 'Shimmer:APQ5',
       'Shimmer:APQ11', 'Shimmer:DDA', 'NHR', 'HNR', 'RPDE', 'DFA', 'PPE'],
      dtype='object')
Out[3]:
subject#         0
age              0
sex              0
test_time        0
motor_UPDRS      0
total_UPDRS      0
Jitter(%)        0
Jitter(Abs)      0
Jitter:RAP       0
Jitter:PPQ5      0
Jitter:DDP       0
Shimmer          0
Shimmer(dB)      0
Shimmer:APQ3     0
Shimmer:APQ5     0
Shimmer:APQ11    0
Shimmer:DDA      0
NHR              0
HNR              0
RPDE             0
DFA              0
PPE              0
dtype: int64
Note: To see how to create dummy variables see my post Practical Machine Learning with R and Python – Part 2
In [4]:
# Drop the columns subject number as it is not relevant
parkinsons1=parkinsons.drop(['subject#'],axis=1)

# Create dummy variables for sex (M/F)
parkinsons2=pd.get_dummies(parkinsons1,columns=['sex'])
parkinsons2.head()

Out[4]
age test_time motor_UPDRS total_UPDRS Jitter(%) Jitter(Abs) Jitter:RAP Jitter:PPQ5 Jitter:DDP Shimmer Shimmer(dB) Shimmer:APQ3 Shimmer:APQ5 Shimmer:APQ11 Shimmer:DDA NHR HNR RPDE DFA PPE sex_0 sex_1
0 72 5.6431 28.199 34.398 0.00662 0.000034 0.00401 0.00317 0.01204 0.02565 0.230 0.01438 0.01309 0.01662 0.04314 0.014290 21.640 0.41888 0.54842 0.16006 1 0
1 72 12.6660 28.447 34.894 0.00300 0.000017 0.00132 0.00150 0.00395 0.02024 0.179 0.00994 0.01072 0.01689 0.02982 0.011112 27.183 0.43493 0.56477 0.10810 1 0
2 72 19.6810 28.695 35.389 0.00481 0.000025 0.00205 0.00208 0.00616 0.01675 0.181 0.00734 0.00844 0.01458 0.02202 0.020220 23.047 0.46222 0.54405 0.21014 1 0
3 72 25.6470 28.905 35.810 0.00528 0.000027 0.00191 0.00264 0.00573 0.02309 0.327 0.01106 0.01265 0.01963 0.03317 0.027837 24.445 0.48730 0.57794 0.33277 1 0
4 72 33.6420 29.187 36.375 0.00335 0.000020 0.00093 0.00130 0.00278 0.01703 0.176 0.00679 0.00929 0.01819 0.02036 0.011625 26.126 0.47188 0.56122 0.19361 1 0

# Create a training and test data set with 80%/20%
train_dataset = parkinsons2.sample(frac=0.8,random_state=0)
test_dataset = parkinsons2.drop(train_dataset.index)

# Select columns
train_dataset1= train_dataset[['age', 'test_time', 'Jitter(%)', 'Jitter(Abs)',
       'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP', 'Shimmer', 'Shimmer(dB)',
       'Shimmer:APQ3', 'Shimmer:APQ5', 'Shimmer:APQ11', 'Shimmer:DDA', 'NHR',
       'HNR', 'RPDE', 'DFA', 'PPE', 'sex_0', 'sex_1']]
test_dataset1= test_dataset[['age','test_time', 'Jitter(%)', 'Jitter(Abs)',
       'Jitter:RAP', 'Jitter:PPQ5', 'Jitter:DDP', 'Shimmer', 'Shimmer(dB)',
       'Shimmer:APQ3', 'Shimmer:APQ5', 'Shimmer:APQ11', 'Shimmer:DDA', 'NHR',
       'HNR', 'RPDE', 'DFA', 'PPE', 'sex_0', 'sex_1']]
In [7]:
# Generate the statistics of the columns for use in normalization of the data
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
train_stats
Out[7]:
count mean std min 25% 50% 75% max
age 4700.0 64.792766 8.870401 36.000000 58.000000 65.000000 72.000000 85.000000
test_time 4700.0 93.399490 53.630411 -4.262500 46.852250 93.405000 139.367500 215.490000
Jitter(%) 4700.0 0.006136 0.005612 0.000830 0.003560 0.004900 0.006770 0.099990
Jitter(Abs) 4700.0 0.000044 0.000036 0.000002 0.000022 0.000034 0.000053 0.000396
Jitter:RAP 4700.0 0.002969 0.003089 0.000330 0.001570 0.002235 0.003260 0.057540
Jitter:PPQ5 4700.0 0.003271 0.003760 0.000430 0.001810 0.002480 0.003460 0.069560
Jitter:DDP 4700.0 0.008908 0.009267 0.000980 0.004710 0.006705 0.009790 0.172630
Shimmer 4700.0 0.033992 0.025922 0.003060 0.019020 0.027385 0.039810 0.268630
Shimmer(dB) 4700.0 0.310487 0.231016 0.026000 0.175000 0.251000 0.363250 2.107000
Shimmer:APQ3 4700.0 0.017125 0.013275 0.001610 0.009190 0.013615 0.020562 0.162670
Shimmer:APQ5 4700.0 0.020151 0.016848 0.001940 0.010750 0.015785 0.023733 0.167020
Shimmer:APQ11 4700.0 0.027508 0.020270 0.002490 0.015630 0.022685 0.032713 0.275460
Shimmer:DDA 4700.0 0.051375 0.039826 0.004840 0.027567 0.040845 0.061683 0.488020
NHR 4700.0 0.032116 0.060206 0.000304 0.010827 0.018403 0.031452 0.748260
HNR 4700.0 21.704631 4.288853 1.659000 19.447750 21.973000 24.445250 37.187000
RPDE 4700.0 0.542549 0.100212 0.151020 0.471235 0.543490 0.614335 0.966080
DFA 4700.0 0.653015 0.070446 0.514040 0.596470 0.643285 0.710618 0.865600
PPE 4700.0 0.219559 0.091506 0.021983 0.156470 0.205340 0.264017 0.731730
sex_0 4700.0 0.681489 0.465948 0.000000 0.000000 1.000000 1.000000 1.000000
sex_1 4700.0 0.318511 0.465948 0.000000 0.000000 0.000000 1.000000 1.000000
In [0]:
# Create the target variable
train_labels = train_dataset.pop('motor_UPDRS')
test_labels = test_dataset.pop('motor_UPDRS')
In [0]:
# Normalize the data by subtracting the mean and dividing by the standard deviation
def normalize(x):
  return (x - train_stats['mean']) / train_stats['std']

# Create normalized training and test data
normalized_train_data = normalize(train_dataset1)
normalized_test_data = normalize(test_dataset1)
In [0]:
# Create a Deep Learning model with keras
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(6,activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])

# Use the Adam optimizer with a learning rate of 0.01
optimizer=keras.optimizers.Adam(lr=.01, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

# Set the metrics required to be Mean Absolute Error and Mean Squared Error.For regression, the loss is mean_squared_error
model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                metrics=['mean_absolute_error', 'mean_squared_error'])
In [0]:
# Create a model
history=model.fit(
  normalized_train_data, train_labels,
  epochs=1000, validation_data = (normalized_test_data,test_labels), verbose=0)
In [26]:
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
Out[26]:
loss mean_absolute_error mean_squared_error val_loss val_mean_absolute_error val_mean_squared_error epoch
995 15.773989 2.936990 15.773988 16.980803 3.028168 16.980803 995
996 15.238623 2.873420 15.238622 17.458752 3.101033 17.458752 996
997 15.437594 2.895500 15.437593 16.926016 2.971508 16.926018 997
998 15.867891 2.943521 15.867892 16.950249 2.985036 16.950249 998
999 15.846878 2.938914 15.846880 17.095623 3.014504 17.095625 999
In [30]:
def plot_history(history):
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Abs Error')
  plt.plot(hist['epoch'], hist['mean_absolute_error'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
           label = 'Val Error')
  plt.ylim([2,5])
  plt.legend()

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Square Error ')
  plt.plot(hist['epoch'], hist['mean_squared_error'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_squared_error'],
           label = 'Val Error')
  plt.ylim([10,40])
  plt.legend()
  plt.show()


plot_history(history)

Observation

It can be seen that the mean absolute error is on an average about +/- 4.0. The validation error also is about the same. This can be reduced by playing around with the hyperparamaters and increasing the number of iterations

1a. Multivariate Regression in Tensorflow – R

# Install Tensorflow in RStudio
#install_tensorflow()
# Install Keras
#install_packages("keras")
library(tensorflow)
library(keras)
library(dplyr)
library(dummies)
## dummies-1.5.6 provided by Decision Patterns
library(tensorflow)
library(keras)

Multivariate regression

This code performs multivariate regression using Tensorflow and keras on the advent of Parkinson disease through sound recordings see Parkinson Speech Dataset with Multiple Types of Sound Recordings Data Set. The clinician’s motorUPDRS score has to be predicted from the set of features.

Read the data

# Download the Parkinson's data from UCI Machine Learning repository
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/telemonitoring/parkinsons_updrs.data")

# Set the column names
names(dataset) <- c("subject","age", "sex", "test_time","motor_UPDRS","total_UPDRS","Jitter","Jitter.Abs",
                 "Jitter.RAP","Jitter.PPQ5","Jitter.DDP","Shimmer", "Shimmer.dB", "Shimmer.APQ3",
                 "Shimmer.APQ5","Shimmer.APQ11","Shimmer.DDA", "NHR","HNR", "RPDE", "DFA","PPE")

# Remove the column 'subject' as it is not relevant to analysis
dataset1 <- subset(dataset, select = -c(subject))

# Make the column 'sex' as a factor for using dummies
dataset1$sex=as.factor(dataset1$sex)
# Add dummy variables for categorical cariable 'sex'
dataset2 <- dummy.data.frame(dataset1, sep = ".")
## Warning in model.matrix.default(~x - 1, model.frame(~x - 1), contrasts =
## FALSE): non-list contrasts argument ignored
dataset3 <- na.omit(dataset2)

Split the data as training and test in 80/20

## Split data 80% training and 20% test
sample_size <- floor(0.8 * nrow(dataset3))

## set the seed to make your partition reproducible
set.seed(12)
train_index <- sample(seq_len(nrow(dataset3)), size = sample_size)

train_dataset <- dataset3[train_index, ]
test_dataset <- dataset3[-train_index, ]

train_data <- train_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
                              Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
                              Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)

train_labels <- select(train_dataset,motor_UPDRS)
test_data <- test_dataset %>% select(sex.0,sex.1,age, test_time,Jitter,Jitter.Abs,Jitter.PPQ5,Jitter.DDP,
                              Shimmer, Shimmer.dB,Shimmer.APQ3,Shimmer.APQ11,
                              Shimmer.DDA,NHR,HNR,RPDE,DFA,PPE)
test_labels <- select(test_dataset,motor_UPDRS)

Normalize the data

 # Normalize the data by subtracting the mean and dividing by the standard deviation
normalize<-function(x) {
  y<-(x - mean(x)) / sd(x)
  return(y)
}

normalized_train_data <-apply(train_data,2,normalize)
# Convert to matrix
train_labels <- as.matrix(train_labels)
normalized_test_data <- apply(test_data,2,normalize)
test_labels <- as.matrix(test_labels)

Create the Deep Learning Model

model <- keras_model_sequential()
model %>% 
  layer_dense(units = 6, activation = 'relu', input_shape = dim(normalized_train_data)[2]) %>% 
  layer_dense(units = 9, activation = 'relu') %>%
  layer_dense(units = 6, activation = 'relu') %>%
  layer_dense(units = 1)

# Set the metrics required to be Mean Absolute Error and Mean Squared Error.For regression, the loss is 
# mean_squared_error
model %>% compile(
  loss = 'mean_squared_error',
  optimizer = optimizer_rmsprop(),
  metrics = c('mean_absolute_error','mean_squared_error')
)

# Fit the model
# Use the test data for validation
history <- model %>% fit(
  normalized_train_data, train_labels, 
  epochs = 30, batch_size = 128, 
  validation_data = list(normalized_test_data,test_labels)
)

Plot mean squared error, mean absolute error and loss for training data and test data

plot(history)

Fig1

2. Binary classification in Tensorflow – Python

This is a simple binary classification problem from UCI Machine Learning repository and deals with data on Breast cancer from the Univ. of Wisconsin Breast Cancer Wisconsin (Diagnostic) Data Set bold text

In [31]:
import tensorflow as tf
from tensorflow import keras
import pandas as pd
# Read the data set from UCI ML site
dataset_path = keras.utils.get_file("breast-cancer-wisconsin.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data")
raw_dataset = pd.read_csv(dataset_path, sep=",", na_values = "?", skipinitialspace=True,)
dataset = raw_dataset.copy()

#Check for Null and drop
dataset.isna().sum()
dataset = dataset.dropna()
dataset.isna().sum()

# Set the column names
dataset.columns = ["id","thickness",	"cellsize",	"cellshape","adhesion","epicellsize",
                    "barenuclei","chromatin","normalnucleoli","mitoses","class"]
dataset.head()
Downloading data from https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data
24576/19889 [=====================================] - 0s 1us/step
id	thickness	cellsize	cellshape	adhesion	epicellsize	barenuclei	chromatin	normalnucleoli	mitoses	class
0	1002945	5	4	4	5	7	10.0	3	2	1	2
1	1015425	3	1	1	1	2	2.0	3	1	1	2
2	1016277	6	8	8	1	3	4.0	3	7	1	2
3	1017023	4	1	1	3	2	1.0	3	1	1	2
4	1017122	8	10	10	8	7	10.0	9	7	1	4
# Create a training/test set in the ratio 80/20
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)

# Set the training and test set
train_dataset1= train_dataset[['thickness','cellsize','cellshape','adhesion',
                'epicellsize', 'barenuclei', 'chromatin', 'normalnucleoli','mitoses']]
test_dataset1=test_dataset[['thickness','cellsize','cellshape','adhesion',
                'epicellsize', 'barenuclei', 'chromatin', 'normalnucleoli','mitoses']]
In [34]:
# Generate the stats for each column to be used for normalization
train_stats = train_dataset1.describe()
train_stats = train_stats.transpose()
train_stats
Out[34]:
count mean std min 25% 50% 75% max
thickness 546.0 4.430403 2.812768 1.0 2.0 4.0 6.0 10.0
cellsize 546.0 3.179487 3.083668 1.0 1.0 1.0 5.0 10.0
cellshape 546.0 3.225275 3.005588 1.0 1.0 1.0 5.0 10.0
adhesion 546.0 2.921245 2.937144 1.0 1.0 1.0 4.0 10.0
epicellsize 546.0 3.261905 2.252643 1.0 2.0 2.0 4.0 10.0
barenuclei 546.0 3.560440 3.651946 1.0 1.0 1.0 7.0 10.0
chromatin 546.0 3.483516 2.492687 1.0 2.0 3.0 5.0 10.0
normalnucleoli 546.0 2.875458 3.064305 1.0 1.0 1.0 4.0 10.0
mitoses 546.0 1.609890 1.736762 1.0 1.0 1.0 1.0 10.0
In [0]:
# Create target variables
train_labels = train_dataset.pop('class')
test_labels = test_dataset.pop('class')
In [0]:
# Set the target variables as 0 or 1
train_labels[train_labels==2] =0 # benign
train_labels[train_labels==4] =1 # malignant

test_labels[test_labels==2] =0 # benign
test_labels[test_labels==4] =1 # malignant
In [0]:
# Normalize by subtracting mean and dividing by standard deviation
def normalize(x):
  return (x - train_stats['mean']) / train_stats['std']

# Convert columns to numeric
train_dataset1 = train_dataset1.apply(pd.to_numeric)
test_dataset1 = test_dataset1.apply(pd.to_numeric)

# Normalize
normalized_train_data = normalize(train_dataset1)
normalized_test_data = normalize(test_dataset1)
In [0]:
# Create a model
model = tf.keras.Sequential([
    keras.layers.Dense(6, activation=tf.nn.relu, input_shape=[len(train_dataset1.keys())]),
    keras.layers.Dense(9, activation=tf.nn.relu),
    keras.layers.Dense(6,activation=tf.nn.relu),
    keras.layers.Dense(1)
  ])

# Use the RMSProp optimizer
optimizer = tf.keras.optimizers.RMSprop(0.01)

# Since this is binary classification use binary_crossentropy
model.compile(loss='binary_crossentropy',
                optimizer=optimizer,
                metrics=['acc'])


# Fit a model
history=model.fit(
  normalized_train_data, train_labels,
  epochs=1000, validation_data=(normalized_test_data,test_labels), verbose=0)
In [55]:
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
loss acc val_loss val_acc epoch
995 0.112499 0.992674 0.454739 0.970588 995
996 0.112499 0.992674 0.454739 0.970588 996
997 0.112499 0.992674 0.454739 0.970588 997
998 0.112499 0.992674 0.454739 0.970588 998
999 0.112499 0.992674 0.454739 0.970588 999
In [58]:
# Plot training and test accuracy 
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.ylim([0.9,1])
plt.show()












# Plot training and test loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.ylim([0,0.5])
plt.show()


2a. Binary classification in Tensorflow -R

This is a simple binary classification problem from UCI Machine Learning repository and deals with data on Breast cancer from the Univ. of Wisconsin Breast Cancer Wisconsin (Diagnostic) Data Set

# Read the data for Breast cancer (Wisconsin)
dataset <- read.csv("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data")

# Rename the columns
names(dataset) <- c("id","thickness",   "cellsize", "cellshape","adhesion","epicellsize",
                    "barenuclei","chromatin","normalnucleoli","mitoses","class")

# Remove the columns id and class
dataset1 <- subset(dataset, select = -c(id, class))
dataset2 <- na.omit(dataset1)

# Convert the column to numeric
dataset2$barenuclei <- as.numeric(dataset2$barenuclei)

Normalize the data

train_data <-apply(dataset2,2,normalize)
train_labels <- as.matrix(select(dataset,class))

# Set the target variables as 0 or 1 as it binary classification
train_labels[train_labels==2,]=0
train_labels[train_labels==4,]=1

Create the Deep Learning model

model <- keras_model_sequential()
model %>% 
  layer_dense(units = 6, activation = 'relu', input_shape = dim(train_data)[2]) %>% 
  layer_dense(units = 9, activation = 'relu') %>%
  layer_dense(units = 6, activation = 'relu') %>%
  layer_dense(units = 1)

# Since this is a binary classification we use binary cross entropy
model %>% compile(
  loss = 'binary_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')  # Metrics is accuracy
)

Fit the model. Use 20% of data for validation

history <- model %>% fit(
  train_data, train_labels, 
  epochs = 30, batch_size = 128, 
  validation_split = 0.2
)

Plot the accuracy and loss for training and validation data

plot(history)

3. MNIST in Tensorflow – Python

This takes the famous MNIST handwritten digits . It ca be seen that Tensorflow and Keras make short work of this famous problem of the late 1980s

# Download MNIST data
mnist=tf.keras.datasets.mnist
# Set training and test data and labels
(training_images,training_labels),(test_images,test_labels)=mnist.load_data()

print(training_images.shape)
print(test_images.shape)
(60000, 28, 28)
(10000, 28, 28)
In [61]:
# Plot a sample image from MNIST and show contents
import matplotlib.pyplot as plt
plt.imshow(training_images[1])
print(training_images[1])
[[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 51 159 253
159 50 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 48 238 252 252
252 237 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 54 227 253 252 239
233 252 57 6 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 10 60 224 252 253 252 202
84 252 253 122 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 163 252 252 252 253 252 252
96 189 253 167 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 51 238 253 253 190 114 253 228
47 79 255 168 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 48 238 252 252 179 12 75 121 21
0 0 253 243 50 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 38 165 253 233 208 84 0 0 0 0
0 0 253 252 165 0 0 0 0 0]
[ 0 0 0 0 0 0 0 7 178 252 240 71 19 28 0 0 0 0
0 0 253 252 195 0 0 0 0 0]
[ 0 0 0 0 0 0 0 57 252 252 63 0 0 0 0 0 0 0
0 0 253 252 195 0 0 0 0 0]
[ 0 0 0 0 0 0 0 198 253 190 0 0 0 0 0 0 0 0
0 0 255 253 196 0 0 0 0 0]
[ 0 0 0 0 0 0 76 246 252 112 0 0 0 0 0 0 0 0
0 0 253 252 148 0 0 0 0 0]
[ 0 0 0 0 0 0 85 252 230 25 0 0 0 0 0 0 0 0
7 135 253 186 12 0 0 0 0 0]
[ 0 0 0 0 0 0 85 252 223 0 0 0 0 0 0 0 0 7
131 252 225 71 0 0 0 0 0 0]
[ 0 0 0 0 0 0 85 252 145 0 0 0 0 0 0 0 48 165
252 173 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 86 253 225 0 0 0 0 0 0 114 238 253
162 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 85 252 249 146 48 29 85 178 225 253 223 167
56 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 85 252 252 252 229 215 252 252 252 196 130 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 28 199 252 252 253 252 252 233 145 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 25 128 252 253 252 141 37 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]]


# Normalize the images by dividing by 255.0
training_images = training_images/255.0
test_images = test_images/255.0

# Create a Sequential Keras model
model = tf.keras.models.Sequential([tf.keras.layers.Flatten(),
                                   tf.keras.layers.Dense(1024,activation=tf.nn.relu),
                                   tf.keras.layers.Dense(10,activation=tf.nn.softmax)])
model.compile(optimizer='adam',loss='sparse_categorical_crossentropy',metrics=['accuracy'])
In [68]:
history=model.fit(training_images,training_labels,validation_data=(test_images, test_labels), epochs=5, verbose=1)
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 17s 291us/sample - loss: 0.0020 - acc: 0.9999 - val_loss: 0.0719 - val_acc: 0.9810
Epoch 2/5
60000/60000 [==============================] - 17s 284us/sample - loss: 0.0021 - acc: 0.9998 - val_loss: 0.0705 - val_acc: 0.9821
Epoch 3/5
60000/60000 [==============================] - 17s 286us/sample - loss: 0.0017 - acc: 0.9999 - val_loss: 0.0729 - val_acc: 0.9805
Epoch 4/5
60000/60000 [==============================] - 17s 284us/sample - loss: 0.0014 - acc: 0.9999 - val_loss: 0.0762 - val_acc: 0.9804
Epoch 5/5
60000/60000 [==============================] - 17s 280us/sample - loss: 0.0015 - acc: 0.9999 - val_loss: 0.0735 - val_acc: 0.9812

Fig 1

Fig 2

 

 

 

 

 

 

 

 

MNIST in Tensorflow – R

The following code uses Tensorflow to learn MNIST’s handwritten digits ### Load MNIST data

mnist <- dataset_mnist()
x_train <- mnist$train$x
y_train <- mnist$train$y
x_test <- mnist$test$x
y_test <- mnist$test$y

Reshape and rescale

# Reshape the array
x_train <- array_reshape(x_train, c(nrow(x_train), 784))
x_test <- array_reshape(x_test, c(nrow(x_test), 784))
# Rescale
x_train <- x_train / 255
x_test <- x_test / 255

Convert out put to One Hot encoded format

y_train <- to_categorical(y_train, 10)
y_test <- to_categorical(y_test, 10)

Fit the model

Use the softmax activation for recognizing 10 digits and categorical cross entropy for loss

model <- keras_model_sequential() 
model %>% 
  layer_dense(units = 256, activation = 'relu', input_shape = c(784)) %>% 
  layer_dense(units = 128, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'softmax') # Use softmax

model %>% compile(
  loss = 'categorical_crossentropy',
  optimizer = optimizer_rmsprop(),
  metrics = c('accuracy')
)

Fit the model

Note: A smaller number of epochs has been used. For better performance increase number of epochs

history <- model %>% fit(
  x_train, y_train, 
  epochs = 5, batch_size = 128, 
  validation_data = list(x_test,y_test)
)

Cricpy adds team analytics to its arsenal!!

I can’t sit still and see another man slaving and working. I want to get up and superintend, and walk round with my hands in my pockets, and tell him what to do. It is my energetic nature. I can’t help it.

It always does seem to me that I am doing more work than I should do. It is not that I object to the work, mind you; I like work: it fascinates me. I can sit and look at it for hours. I love to keep it by me: the idea of getting rid of it nearly breaks my heart.

Let your boat of life be light, packed with only what you need – a homely home and simple pleasures, one or two friends, worth the name, someone to love and someone to love you, a cat, a dog, and a pipe or two, enough to eat and enough to wear, and a little more than enough to drink; for thirst is a dangerous thing.

                Three Men in a boat by Jerome K Jerome
                

Introduction

Cricpy, the python avatar of my R package was born about a 9 months back see Introducing cricpy:A python package to analyze performances of cricketers. Cricpy, like its R twin, can analyze performance of batsmen & bowlers in Test, ODI and T20 formats. About a week and a half back, I added team analytics to my R package cricketr see Cricketr adds team analytics to its repertoire!!!. If cricketr has team analysis functions, then can cricpy be far behind? So, I have included the same 8 functions which can perform Team analytics into cricpy also. Team performance analysis can be done for Test, ODI and T20 matches.

This package uses the statistics info available in ESPN Cricinfo Statsguru. The current version of this package can handle all formats of the game including Test, ODI and Twenty20 cricket.

You should be able to install the package using pip install cricpy. Please be mindful of ESPN Cricinfo Terms of Use

There are 5 functions which are used internally 1) getTeamData b) getTeamNumber c) getMatchType d) getTeamDataHomeAway e) cleanTeamData

and the external functions which a) teamWinLossStatusVsOpposition b) teamWinLossStatusAtGrounds c) plotTimelineofhttps://drive.google.com/file/d/1l4nQsRZ0C2FyPosigZmo0t-kC2xZZ_wl/view?usp=sharingWinsLosses

All the above functions are common to Test, ODI and T20 teams

The data for a particular Team can be obtained with the getTeamDataHomeAway() function from the package. This will return a dataframe of the team’s win/loss status at home and away venues over a period of time. This can be saved as a CSV file. Once this is done, you can use this CSV file for all subsequent analysis

This post has been published at Rpubs at teamAnalyticsCricpy You can download the PDF version of this post at teamAnalyticsCricpy

As before you can get the help for any of the cricpy functions as below

import cricpy.analytics as ca
help(ca.teamWinLossStatusAtGrounds)
## Help on function teamWinLossStatusAtGrounds in module cricpy.analytics:
## 
## teamWinLossStatusAtGrounds(file, teamName, opposition=['all'], homeOrAway=['all'], matchType='Test', plot=False)
##     Compute the wins/losses/draw/tied etc for a Team in Test, ODI or T20 at venues
##     
##     Description
##     
##     This function computes the won,lost,draw,tied or no result for a team against other teams in home/away or neutral venues and either returns a dataframe or plots it for grounds
##     
##     Usage
##     
##     teamWinLossStatusAtGrounds(file,teamName,opposition=["all"],homeOrAway=["all"],
##                   matchType="Test",plot=FALSE)
##     Arguments
##     
##     file        
##     The CSV file for which the plot is required
##     teamName    
##     The name of the team for which plot is required
##     opposition  
##     Opposition is a vector namely ["all")] or ["Australia", "India", "England"]
##     homeOrAway  
##     This parameter is a vector which is either ["all")] or a vector of venues ["home","away","neutral"]
##     matchType   
##     Match type - Test, ODI or T20
##     plot        
##     If plot=FALSE then a data frame is returned, If plot=TRUE then a plot is generated
##     Value
##     
##     None
##     
##     Note
##     
##     Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
##     
##     Author(s)
##     
##     Tinniam V Ganesh
##     
##     References
##     
##     http://www.espncricinfo.com/ci/content/stats/index.html
##     https://gigadom.in/
##     See Also
##     
##     teamWinLossStatusVsOpposition teamWinLossStatusAtGrounds plotTimelineofWinsLosses
##     
##     Examples
##     
##     ## Not run: 
##     #Get the team data for India for Tests
##     
##     df =getTeamDataHomeAway(teamName="India",file="indiaOD.csv",matchType="ODI")
##     ca.teamWinLossStatusAtGrounds("india.csv",teamName="India",opposition=c("Australia","England","India"),
##                               homeOrAway=c("home","away"),plot=TRUE)
##     
##     ## End(Not run)

1. Get team data

1a. Test

The teams in Test cricket are included below

  1. Afghanistan 2.Bangladesh 3. England 4. World 5. India 6. Ireland 7. New Zealand 8. Pakistan 9. South Africa 10.Sri Lanka 11. West Indies 12.Zimbabwe

You can use this for the teamName paramater. This will return a dataframe and also save the file as a CSV , if save=True

Note: – Since I have already got the data as CSV files I am not executing the lines below

import cricpy.analytics as ca
# Get the data for the teams. Save as CSV
#indiaTest= ca.getTeamDataHomeAway(dir=".",teamView="bat",matchType="Test",file="indiaTest.csv",save=True,teamName="India")
#ca.getTeamDataHomeAway(teamName="South Africa", matchType="Test", file="southafricaTest.csv", save=True)
#ca.getTeamDataHomeAway(teamName="West Indies", matchType="Test", file="westindiesTest.csv", save=True)
#newzealandTest = ca.getTeamDataHomeAway(matchType="Test",file="newzealandTest.csv",save=True,teamName="New Zealand")

1b. ODI

The ODI teams in the world are below. The data for these teams can be got by names as shown below

  1. Afghanistan 2. Africa XI 3. Asia XI 4.Australia 5.Bangladesh
  2. Bermuda 7. England 8. ICC World X1 9. India 11.Ireland 12. New Zealand 13. Pakistan       14. South Africa 15.Sri Lanka 17. West Indies 18. Zimbabwe 19 Canada    21. East Africa        22. Hong Kong 23.Ireland 24. Kenya 25. Namibia 26.Nepal 27.Netherlands 28. Oman 29.Papua New Guinea 30. Scotland 31 United Arab Emirates 32. United States of America
import cricpy.analytics as ca
#indiaODI=  ca.getTeamDataHomeAway(dir=".",matchType="ODI",file="indiaODI.csv",save=True,teamName="India")
#englandODI =  ca.getTeamDataHomeAway(matchType="ODI",file="englandODI.csv",save=True,teamName="England")
#westindiesODI = ca.getTeamDataHomeAway(matchType="ODI",file="westindiesODI.csv",save=True,teamName="West Indies")
#irelandODI <- ca.getTeamDataHomeAway(matchType="ODI",file="irelandODI.csv",save=True,teamName="Ireland")

1c T20

The T20 teams in the world are

  1. Afghanistan 2. Australia 3. Bahrain 4. Bangladesh 5. Belgium 6. Belize
  2. Bermuda 8.Botswana 9. Canada 11. Costa Rica 12. Germany 13. Ghana
  3. Guernsey 15. Hong Kong 16. ICC World X1 17.India 18. Ireland 19.Italy
  4. Jersey 21. Kenya 22.Kuwait 23.Maldives 24.Malta 25.Mexico 26.Namibia
    27.Nepal 28.Netherlands 29. New Zealand 30.Nigeria 31.Oman 32. Pakistan
    33.Panama 34.Papua New Guinea 35. Philippines 36.Qatar 37.Saudi Arabia
    38.Scotland 39.South Africa 40.Spain 41.Sri Lanka 42.Uganda 43.United Arab Emirates United States of America 44.Vanuatu 45.West Indies
import cricpy.analytics as ca
#southafricaT20 = ca.getTeamDataHomeAway(matchType="T20",file="southafricaT20.csv",save=True,teamName="South Africa")
#srilankaT20 = ca.getTeamDataHomeAway(matchType="T20",file="srilankaT20.csv",save=True,teamName="Sri Lanka")
#canadaT20 = ca.getTeamDataHomeAway(matchType="T20",file="canadaT20.csv",save=True,teamName="Canada")
#afghanistanT20 = ca.getTeamDataHomeAway(matchType="T20",file="afghanistanT20.csv",save=True,teamName="Afghanistan")

2 Analysis of Test matches

The functions below perform analysis of Test teams

2a. Wins vs Loss against opposition

This function performs analysis of Test teams against other teams at home/away or neutral venue. Note:- The opposition can be a list of opposition teams. Similarly homeOrAway can also be a list of home/away/neutral venues.

import cricpy.analytics as ca
# Get the performance of Indian test team against all teams at all venues as a dataframe
df =ca.teamWinLossStatusVsOpposition("indiaTest.csv",teamName="India",opposition=["all"], homeOrAway=["all"], matchType="Test", plot=False)
print(df)
## ha                   away  home
## Opposition   Result            
## Afghanistan  won      0.0   1.0
## Australia    draw    20.0  23.0
##              lost    58.0  26.0
##              tied     0.0   2.0
##              won     13.0  39.0
## Bangladesh   draw     3.0   0.0
##              won      9.0   2.0
## England      draw    35.0  48.0
##              lost    68.0  26.0
##              won     13.0  33.0
## New Zealand  draw    18.0  28.0
##              lost    16.0   4.0
##              won     10.0  28.0
## Pakistan     draw    29.0  34.0
##              lost    14.0  10.0
##              won      2.0  13.0
## South Africa draw    13.0   3.0
##              lost    20.0  10.0
##              won      6.0  15.0
## Sri Lanka    draw    11.0  14.0
##              lost    14.0   0.0
##              won     16.0  13.0
## West Indies  draw    39.0  35.0
##              lost    32.0  28.0
##              won     13.0  21.0
## Zimbabwe     draw     1.0   1.0
##              lost     4.0   0.0
##              won      5.0   6.0
# Plot the performance of Indian Test team  against all teams at all venues
ca.teamWinLossStatusVsOpposition("indiaTest.csv",teamName="India",opposition=["all"],homeOrAway=["all"],matchType="Test",plot=True)















# Get the performance of Australia against India, England and New Zealand at all venues in Tests
df =ca.teamWinLossStatusVsOpposition("southafricaTest.csv",teamName="South Africa",opposition=["India","England","New Zealand"],homeOrAway=["all"],matchType="Test",plot=False)
print(df)

#Plot the performance of Australia against England, India and New Zealand only at home (Australia) 
## ha                  away  home
## Opposition  Result            
## England     draw      43    55
##             lost      60    62
##             won       26    34
## India       draw       5    14
##             lost      16     6
##             won        7    19
## New Zealand draw      20     7
##             lost       2     6
##             won       14    29
ca.teamWinLossStatusVsOpposition("southafricaTest.csv",teamName="South Africa",opposition=["India","England","New Zealand"],homeOrAway=["home","away"],matchType="Test",plot=True)

 

2b Wins vs losses of Test teams against opposition at different venues

import cricpy.analytics as ca
# Get the  performance of Pakistan against India, West Indies, South Africa at all venues in Tests and show performances at the venues
df = ca.teamWinLossStatusAtGrounds("westindiesTest.csv",teamName="West Indies",opposition=["India","Sri Lanka","South Africa"],homeOrAway=["all"],matchType="Test",plot=False)
print(df)

# Plot the performance of New Zealand Test team against England, Sri Lanka and Bangladesh at all grounds playes 
## ha                         away  home
## Ground             Result            
## Ahmedabad          won      2.0   0.0
## Basseterre         draw     0.0   3.0
## Bengaluru          draw     2.0   0.0
##                    won      2.0   0.0
## Bridgetown         draw     0.0   6.0
##                    lost     0.0   6.0
##                    won      0.0  14.0
## Cape Town          draw     2.0   0.0
##                    lost     6.0   0.0
## Centurion          lost     6.0   0.0
## Chennai            draw     4.0   0.0
##                    lost     8.0   0.0
##                    won      3.0   0.0
## Colombo (PSS)      lost     2.0   0.0
## Colombo (RPS)      draw     2.0   0.0
## Colombo (SSC)      lost     4.0   0.0
## Delhi              draw     6.0   0.0
##                    lost     2.0   0.0
##                    won      3.0   0.0
## Durban             lost     6.0   0.0
## Galle              draw     1.0   0.0
##                    lost     4.0   0.0
## Georgetown         draw     0.0  10.0
## Gros Islet         draw     0.0   5.0
##                    lost     0.0   2.0
## Hyderabad (Deccan) lost     2.0   0.0
## Johannesburg       lost     4.0   0.0
## Kandy              lost     4.0   0.0
## Kanpur             draw     1.0   0.0
##                    won      3.0   0.0
## Kingston           draw     0.0   8.0
##                    lost     0.0   4.0
##                    won      0.0  15.0
## Kingstown          draw     0.0   2.0
## Kolkata            draw     7.0   0.0
##                    lost     6.0   0.0
##                    won      3.0   0.0
## Mohali             won      2.0   0.0
## Moratuwa           draw     1.0   0.0
## Mumbai             draw     7.0   0.0
##                    lost     6.0   0.0
##                    won      2.0   0.0
## Mumbai (BS)        draw     5.0   0.0
##                    won      2.0   0.0
## Nagpur             draw     2.0   0.0
## North Sound        lost     0.0   2.0
## Pallekele          draw     1.0   0.0
## Port Elizabeth     draw     1.0   0.0
##                    lost     2.0   0.0
##                    won      2.0   0.0
## Port of Spain      draw     0.0  12.0
##                    lost     0.0  12.0
##                    won      0.0  10.0
## Providence         lost     0.0   2.0
## Rajkot             lost     2.0   0.0
## Roseau             draw     0.0   2.0
## St John's          draw     0.0   6.0
##                    lost     0.0   2.0
##                    won      0.0   2.0
ca. teamWinLossStatusAtGrounds("newzealandTest.csv",teamName="New Zealand",opposition=["England","Sri Lanka","Bangladesh"],homeOrAway=["all"],matchType="Test",plot=True)

 

2c. Plot the time line of wins vs losses of Test teams against opposition at different venues during an interval

import cricpy.analytics as ca
# Plot the time line of wins/losses of India against Australia, West Indies, South Africa in away/neutral venues
#from 2000-01-01 to 2017-01-01
ca.plotTimelineofWinsLosses("indiaTest.csv",teamName="India",opposition=["Australia","West Indies","South Africa"],
                         homeOrAway=["away","neutral"], startDate="2000-01-01",endDate="2017-01-01")
#Plot the time line of wins/losses of Indian Test team from 1970 onwards

ca.plotTimelineofWinsLosses("indiaTest.csv",teamName="India",startDate="1970-01-01",endDate="2017-01-01")

3 ODI

The functions below perform analysis of ODI teams listed above

3a. Wins vs Loss against opposition ODI teams

This function performs analysis of ODI teams against other teams at home/away or neutral venue. Note:- The opposition can be a vector of opposition teams. Similarly homeOrAway can also be a vector of home/away/neutral venues.

import cricpy.analytics as ca
# Get the performance of West Indies in ODIs against all other ODI teams at all venues and retirn as a dataframe
df = ca.teamWinLossStatusVsOpposition("westindiesODI.csv",teamName="West Indies",opposition=["all"],homeOrAway=["all"],matchType="ODI",plot=False)
print(df)

# Plot the performance of West Indies in ODIs against Sri Lanka, India at all venues
## ha                   away  home  neutral
## Opposition   Result                     
## Afghanistan  lost     0.0   1.0      2.0
##              won      0.0   1.0      0.0
## Australia    lost    41.0  25.0      8.0
##              n/r      3.0   0.0      0.0
##              tied     1.0   2.0      0.0
##              won     35.0  18.0      7.0
## Bangladesh   lost     6.0   5.0      3.0
##              n/r      1.0   0.0      1.0
##              won     10.0   8.0      3.0
## Bermuda      won      0.0   0.0      1.0
## Canada       won      2.0   1.0      1.0
## England      lost    22.0  17.0     12.0
##              n/r      0.0   3.0      0.0
##              won     15.0  23.0      6.0
## India        lost    27.0  14.0     18.0
##              n/r      0.0   1.0      0.0
##              tied     1.0   0.0      1.0
##              won     27.0  20.0     15.0
## Ireland      lost     0.0   0.0      1.0
##              won      2.0   3.0      2.0
## Kenya        lost     0.0   0.0      1.0
##              won      3.0   0.0      2.0
## Netherlands  won      0.0   0.0      2.0
## New Zealand  lost    19.0   5.0      3.0
##              n/r      2.0   0.0      2.0
##              won     10.0  15.0      5.0
## P.N.G.       won      0.0   0.0      1.0
## Pakistan     lost    11.0  15.0     34.0
##              tied     1.0   2.0      0.0
##              won     14.0  16.0     41.0
## Scotland     won      0.0   0.0      3.0
## South Africa lost    20.0  17.0      7.0
##              n/r      1.0   0.0      0.0
##              tied     0.0   0.0      1.0
##              won      5.0   7.0      3.0
## Sri Lanka    lost     9.0   5.0     11.0
##              n/r      2.0   1.0      0.0
##              won      3.0   5.0     20.0
## U.A.E.       won      0.0   0.0      2.0
## Zimbabwe     lost     4.0   1.0      5.0
##              n/r      0.0   1.0      0.0
##              tied     1.0   0.0      0.0
##              won      9.0  15.0     12.0
ca.teamWinLossStatusVsOpposition("westindiesODI.csv",teamName="West Indies",opposition=["Sri Lanka", "India"],homeOrAway=["all"],matchType="ODI",plot=True)















#Plot the performance of Ireland in ODIs against Zimbabwe, Kenya, bermuda, UAE, Oman and Scotland at all venues
ca.teamWinLossStatusVsOpposition("irelandODI.csv",teamName="Ireland",opposition=["Zimbabwe","Kenya","Bermuda","U.A.E.","Oman","Scotland"],homeOrAway=["all"],matchType="ODI",plot=True)

 

3b Wins vs losses of ODI teams against opposition at different venues

import cricpy.analytics as ca
# Plot the performance of England ODI team against Bangladesh, West Indies and Australia at all venues
ca.teamWinLossStatusAtGrounds("englandODI.csv",teamName="England",opposition=["West Indies"],homeOrAway=["all"],matchType="ODI",plot=True)
























#Plot the performance of India against South Africa, West Indies and Australia at 'home' venues
ca.teamWinLossStatusAtGrounds("indiaODI.csv",teamName="India",opposition=["South Africa"],homeOrAway=["home"],matchType="ODI",plot=True)

 

3c. Plot the time line of wins vs losses of ODI teams against opposition at different venues during an interval


import cricpy.analytics as ca
#Plot the time line of wins/losses of Bangladesh ODI team between 2015 and 2019 against all other teams and at
# all venues
ca.plotTimelineofWinsLosses("bangladeshOD.csv",teamName="Bangladesh",startDate="2015-01-01",endDate="2019-01-01",matchType="ODI")























#Plot the time line of wins/losses of India ODI against Sri Lanka, Bangladesh from 2016 to 2019
ca.plotTimelineofWinsLosses("indiaODI.csv",teamName="India",opposition=["Sri Lanka","Bangladesh"],startDate="2016-01-01",endDate="2019-01-01",matchType="ODI")

 

4 Twenty 20

The functions below perform analysis of Twenty 20 teams listed above

4a. Wins vs Loss against opposition ODI teams

This function performs analysis of T20 teams against other T20 teams at home/away or neutral venue. Note:- The opposition can be a list of opposition teams. Similarly homeOrAway can also be a list of home/away/neutral venues.

import cricpy.analytics as ca
# Get the performance of South Africa T20 team against England, India and Sri Lanka at home grounds at England
df = ca.teamWinLossStatusVsOpposition("southafricaT20.csv",teamName="South Africa",opposition=["England","India","Sri Lanka"], homeOrAway=["home"], matchType="T20", plot=False)
print(df)

#Plot the performance of South Africa T20 against England, India and Sri Lanka at all venues
## ha                 home
## Opposition Result      
## England    lost       1
##            won        4
## India      lost       5
##            won        2
## Sri Lanka  lost       2
##            tied       1
##            won        3
ca.teamWinLossStatusVsOpposition("southafricaT20.csv",teamName="South Africa", opposition=["England","India","Sri Lanka"],homeOrAway=["all"],matchType="T20",plot=True)

























#Plot the performance of Afghanistan T20 teams against all oppositions

 

ca.teamWinLossStatusVsOpposition("afghanistanT20.csv",teamName="Afghanistan",opposition=["all"],homeOrAway=["all"],matchType="T20",plot=True)

 

4b Wins vs losses of T20 teams against opposition at different venues

# Compute the performance of Canada against all opposition at all venues and show by grounds. Return as dataframe
df =ca.teamWinLossStatusAtGrounds("canadaT20.csv",teamName="Canada",opposition=["all"],homeOrAway=["all"],matchType="T20",plot=False)
print(df)

# Plot the performance of Sri Lanka T20 team against India and Bangladesh in different venues at home/away and neutral
## ha                     home  neutral
## Ground         Result               
## Abu Dhabi      lost     0.0      1.0
## Belfast        lost     0.0      1.0
##                won      0.0      2.0
## Colombo (SSC)  lost     0.0      1.0
##                won      0.0      1.0
## Dubai (DSC)    lost     0.0      5.0
## ICCA Dubai     lost     0.0      2.0
##                won      0.0      1.0
## King City (NW) lost     3.0      0.0
##                tied     1.0      0.0
## Sharjah        lost     0.0      1.0
ca.teamWinLossStatusAtGrounds("srilankaT20.csv",teamName="Sri Lanka",opposition=["India", "Bangladesh"], homeOrAway=["all"], matchType="T20", plot=True)

 

4c. Plot the time line of wins vs losses of T20 teams against opposition at different venues during an interval

#Plot the time line of Sri Lanka T20 team agaibst all opposition
ca.plotTimelineofWinsLosses("srilankaT20.csv",teamName="Sri Lanka",opposition=["Australia", "Pakistan"], startDate="2013-01-01", endDate="2019-01-01",  matchType="T20")





















# Plot the time line of South Africa T20 between 2010 and 2015 against West Indies and Pakistan
ca.plotTimelineofWinsLosses("southafricaT20.csv",teamName="South Africa",opposition=["West Indies", "Pakistan"], startDate="2010-01-01", endDate="2015-01-01",  matchType="T20")

Conclusion

With the above additional functions cricpy can now analyze batsmen, bowlers and teams in all formats of the game (Test, ODI and T20).

Have fun with cricpy!!!

You may also like

  1. My book ‘Deep Learning from first principles:Second Edition’ now on Amazon
  2. Practical Machine Learning with R and Python – Part 3
  3. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  4. Revisiting World Bank data analysis with WDI and gVisMotionChart
  5. The Clash of the Titans in Test and ODI cricket
  6. Simulating the domino effect in Android using Box2D and AndEngine
  7. Presentation on Wireless Technologies – Part 1 8.De-blurring revisited with Wiener filter using OpenCV
  8. Cloud Computing – Design Considerations

To see all posts click Index of posts

Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR

“There’s something so paradoxical about pi. On the one hand, it represents order, as embodied by the shape of a circle, long held to be a symbol of perfection and eternity. On the other hand, pi is unruly, disheveled in appearance, its digits obeying no obvious rule, or at least none that we can perceive. Pi is elusive and mysterious, forever beyond reach. Its mix of order and disorder is what makes it so bewitching. ” 

From  Infinite Powers by Steven Strogatz

Anybody who wants to be “anybody” in Big Data must necessarily be able to work on both large structured and unstructured data.  Log analysis is critical in any enterprise which is usually unstructured. As I mentioned in my previous post Big Data: On RDDs, Dataframes,Hive QL with Pyspark and SparkR-Part 3 RDDs are typically used to handle unstructured data. Spark has the Dataframe abstraction over RDDs which performs better as it is optimized with the Catalyst optimization engine. Nevertheless, it is important to be able to process with RDDs.  This post is a continuation of my 3 earlier posts on Big Data namely

1. Big Data-1: Move into the big league:Graduate from Python to Pyspark
2. Big Data-2: Move into the big league:Graduate from R to SparkR
3. Big Data: On RDDs, Dataframes,Hive QL with Pyspark and SparkR-Part 3

This post uses publicly available Webserver logs from NASA. The logs are for the months Jul 95 and Aug 95 and are a good place to start unstructured text analysis/log analysis. I highly recommend parsing these publicly available logs with regular expressions. It is only when you do that the truth of Jamie Zawinski’s pearl of wisdom

“Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.” – Jamie Zawinksi

hits home. I spent many hours struggling with regex!!

For this post for the RDD part,  I had to refer to Dr. Fisseha Berhane’s blog post Webserver Log Analysis and for the Pyspark part, to the Univ. of California Specialization which I had done 3 years back Big Data Analysis with Apache Spark. Once I had played around with the regex for RDDs and PySpark I managed to get SparkR and SparklyR versions to work.

The notebooks used in this post have been published and are available at

  1. logsAnalysiswithRDDs
  2. logsAnalysiswithPyspark
  3. logsAnalysiswithSparkRandSparklyR

You can also download all the notebooks from Github at WebServerLogsAnalysis

An essential and unavoidable aspect of Big Data processing is the need to process unstructured text.Web server logs are one such area which requires Big Data techniques to process massive amounts of logs. The Common Log Format also known as the NCSA Common log format, is a standardized text file format used by web servers when generating server log files. Because the format is standardized, the files can be readily analyzed.

A publicly available webserver logs is the NASA-HTTP Web server logs. This is good dataset with which we can play around to get familiar to handling web server logs. The logs can be accessed at NASA-HTTP

Description These two traces contain two month’s worth of all HTTP requests to the NASA Kennedy Space Center WWW server in Florida.

Format The logs are an ASCII file with one line per request, with the following columns:

-host making the request. A hostname when possible, otherwise the Internet address if the name could not be looked up.

-timestamp in the format “DAY MON DD HH:MM:SS YYYY”, where DAY is the day of the week, MON is the name of the month, DD is the day of the month, HH:MM:SS is the time of day using a 24-hour clock, and YYYY is the year. The timezone is -0400.

-request given in quotes.

-HTTP reply code.

-bytes in the reply.

1 Parse Web server logs with RDDs

1.1 Read NASA Web server logs

Read the logs files from NASA for the months Jul 95 and Aug 95

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("Spark-Logs-Handling").setMaster("local[*]")
sc = SparkContext.getOrCreate(conf)

sqlcontext = SQLContext(sc)
rdd = sc.textFile("/FileStore/tables/NASA_access_log_*.gz")
rdd.count()
Out[1]: 3461613

1.2Check content

Check the logs to identify the parsing rules required for the logs

i=0
for line in rdd.sample(withReplacement = False, fraction = 0.00001, seed = 100).collect():
    i=i+1
    print(line)
    if i >5:
      break
ix-stp-fl2-19.ix.netcom.com – – [03/Aug/1995:23:03:09 -0400] “GET /images/faq.gif HTTP/1.0” 200 263
slip183-1.kw.jp.ibm.net – – [04/Aug/1995:18:42:17 -0400] “GET /shuttle/missions/sts-70/images/DSC-95EC-0001.gif HTTP/1.0” 200 107133
piweba4y.prodigy.com – – [05/Aug/1995:19:17:41 -0400] “GET /icons/menu.xbm HTTP/1.0” 200 527
ruperts.bt-sys.bt.co.uk – – [07/Aug/1995:04:44:10 -0400] “GET /shuttle/countdown/video/livevideo2.gif HTTP/1.0” 200 69067
dal06-04.ppp.iadfw.net – – [07/Aug/1995:21:10:19 -0400] “GET /images/NASA-logosmall.gif HTTP/1.0” 200 786
p15.ppp-1.directnet.com – – [10/Aug/1995:01:22:54 -0400] “GET /images/KSC-logosmall.gif HTTP/1.0” 200 1204

1.3 Write the parsing rule for each of the fields

  • host
  • timestamp
  • path
  • status
  • content_bytes

1.21 Get IP address/host name

This regex is at the start of the log and includes any non-white characted

import re
rslt=(rdd.map(lambda line: re.search('\S+',line)
   .group(0))
   .take(3)) # Get the IP address \host name
rslt
Out[3]: [‘in24.inetnebr.com’, ‘uplherc.upl.com’, ‘uplherc.upl.com’]

1.22 Get timestamp

Get the time stamp

rslt=(rdd.map(lambda line: re.search(‘(\S+ -\d{4})’,line)
    .groups())
    .take(3))  #Get the  date
rslt
[(‘[01/Aug/1995:00:00:01 -0400’,),
(‘[01/Aug/1995:00:00:07 -0400’,),
(‘[01/Aug/1995:00:00:08 -0400’,)]

1.23 HTTP request

Get the HTTP request sent to Web server \w+ {GET}

# Get the REST call with ” “
rslt=(rdd.map(lambda line: re.search('"\w+\s+([^\s]+)\s+HTTP.*"',line)
    .groups())
    .take(3)) # Get the REST call
rslt
[(‘/shuttle/missions/sts-68/news/sts-68-mcc-05.txt’,),
(‘/’,),
(‘/images/ksclogo-medium.gif’,)]

1.23Get HTTP response status

Get the HTTP response to the request

rslt=(rdd.map(lambda line: re.search('"\s(\d{3})',line)
    .groups())
    .take(3)) #Get the status
rslt
Out[6]: [(‘200’,), (‘304’,), (‘304’,)]

1.24 Get content size

Get the HTTP response in bytes

rslt=(rdd.map(lambda line: re.search(‘^.*\s(\d*)$’,line)
    .groups())
    .take(3)) # Get the content size
rslt
Out[7]: [(‘1839’,), (‘0’,), (‘0’,)]

1.24 Putting it all together

Now put all the individual pieces together into 1 big regular expression and assign to the groups

  1. Host 2. Timestamp 3. Path 4. Status 5. Content_size
rslt=(rdd.map(lambda line: re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s(\d*)$)',line)
    .groups())
    .take(3))
rslt
[(‘in24.inetnebr.com’,
‘ -‘,
‘ ‘,
‘-‘,
‘[01/Aug/1995:00:00:01 -0400]’,
‘”GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0″‘,
‘/shuttle/missions/sts-68/news/sts-68-mcc-05.txt’,
‘200 1839’,
‘1839’),
(‘uplherc.upl.com’,
‘ -‘,
‘ ‘,
‘-‘,
‘[01/Aug/1995:00:00:07 -0400]’,
‘”GET / HTTP/1.0″‘,
‘/’,
‘304 0’,
‘0’),
(‘uplherc.upl.com’,
‘ -‘,
‘ ‘,
‘-‘,
‘[01/Aug/1995:00:00:08 -0400]’,
‘”GET /images/ksclogo-medium.gif HTTP/1.0″‘,
‘/images/ksclogo-medium.gif’,
‘304 0’,
‘0’)]

1.25 Add a log parsing function

import re
def parse_log1(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s(\d*)$)',line)
    if match is None:    
        return(line,0)
    else:
        return(line,1)

1.26 Check for parsing failure

Check how many lines successfully parsed with the parsing function

n_logs = rdd.count()
failed = rdd.map(lambda line: parse_log1(line)).filter(lambda line: line[1] == 0).count()
print('Out of a total of {} logs, {} failed to parse'.format(n_logs,failed))
# Get the failed records line[1] == 0
failed1=rdd.map(lambda line: parse_log1(line)).filter(lambda line: line[1]==0)
failed1.take(3)
Out of a total of 3461613 logs, 38768 failed to parse
Out[10]:
[(‘gw1.att.com – – [01/Aug/1995:00:03:53 -0400] “GET /shuttle/missions/sts-73/news HTTP/1.0” 302 -‘,
0),
(‘js002.cc.utsunomiya-u.ac.jp – – [01/Aug/1995:00:07:33 -0400] “GET /shuttle/resources/orbiters/discovery.gif HTTP/1.0” 404 -‘,
0),
(‘pipe1.nyc.pipeline.com – – [01/Aug/1995:00:12:37 -0400] “GET /history/apollo/apollo-13/apollo-13-patch-small.gif” 200 12859’,
0)]

1.26 The above rule is not enough to parse the logs

It can be seen that the single rule only parses part of the logs and we cannot group the regex separately. There is an error “AttributeError: ‘NoneType’ object has no attribute ‘group'” which shows up

#rdd.map(lambda line: re.search(‘^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s(“\w+\s+([^\s]+)\s+HTTP.*”)\s(\d{3}\s(\d*)$)’,line[0]).group()).take(4)

File “/databricks/spark/python/pyspark/util.py”, line 99, in wrapper
return f(*args, **kwargs)
File “<command-1348022240961444>”, line 1, in <lambda>
AttributeError: ‘NoneType’ object has no attribute ‘group’

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:490)

1.27 Add rule for parsing failed records

One of the issues with the earlier rule is the content_size has “-” for some logs

import re
def parse_failed(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s-$)',line)
    if match is None:        
        return(line,0)
    else:
        return(line,1)

1.28 Parse records which fail

Parse the records that fails with the new rule

failed2=rdd.map(lambda line: parse_failed(line)).filter(lambda line: line[1]==1)
failed2.take(5)
Out[13]:
[(‘gw1.att.com – – [01/Aug/1995:00:03:53 -0400] “GET /shuttle/missions/sts-73/news HTTP/1.0” 302 -‘,
1),
(‘js002.cc.utsunomiya-u.ac.jp – – [01/Aug/1995:00:07:33 -0400] “GET /shuttle/resources/orbiters/discovery.gif HTTP/1.0” 404 -‘,
1),
(‘tia1.eskimo.com – – [01/Aug/1995:00:28:41 -0400] “GET /pub/winvn/release.txt HTTP/1.0” 404 -‘,
1),
(‘itws.info.eng.niigata-u.ac.jp – – [01/Aug/1995:00:38:01 -0400] “GET /ksc.html/facts/about_ksc.html HTTP/1.0” 403 -‘,
1),
(‘grimnet23.idirect.com – – [01/Aug/1995:00:50:12 -0400] “GET /www/software/winvn/winvn.html HTTP/1.0” 404 -‘,
1)]

1.28 Add both rules

Add both rules for parsing the log.

Note it can be shown that even with both rules all the logs are not parse.Further rules may need to be added

import re
def parse_log2(line):
    # Parse logs with the rule below
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(\d*)$',line)
    # If match failed then use the rule below
    if match is None:
        match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s-$)',line)
    if match is None:
        return (line, 0) # Return 0 for failure
    else:
        return (line, 1) # Return 1 for success

1.29 Group the different regex to groups for handling

def map2groups(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(\d*)$',line)
    if match is None:
        match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(-)$',line)    
    return(match.groups())

1.30 Parse the logs and map the groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])

parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))

2. Parse Web server logs with Pyspark

2.1Read data into a Pyspark dataframe

import os
logs_file_path="/FileStore/tables/" + os.path.join('NASA_access_log_*.gz')
from pyspark.sql.functions import split, regexp_extract
base_df = sqlContext.read.text(logs_file_path)
#base_df.show(truncate=False)
from pyspark.sql.functions import split, regexp_extract
split_df = base_df.select(regexp_extract('value', r'^([^\s]+\s)', 1).alias('host'),
                          regexp_extract('value', r'^.*\[(\d\d\/\w{3}\/\d{4}:\d{2}:\d{2}:\d{2} -\d{4})]', 1).alias('timestamp'),
                          regexp_extract('value', r'^.*"\w+\s+([^\s]+)\s+HTTP.*"', 1).alias('path'),
                          regexp_extract('value', r'^.*"\s+([^\s]+)', 1).cast('integer').alias('status'),
                          regexp_extract('value', r'^.*\s+(\d+)$', 1).cast('integer').alias('content_size'))
split_df.show(5,truncate=False)
+———————+————————–+———————————————–+——+————+
|host |timestamp |path |status|content_size|
+———————+————————–+———————————————–+——+————+
|199.72.81.55 |01/Jul/1995:00:00:01 -0400|/history/apollo/ |200 |6245 |
|unicomp6.unicomp.net |01/Jul/1995:00:00:06 -0400|/shuttle/countdown/ |200 |3985 |
|199.120.110.21 |01/Jul/1995:00:00:09 -0400|/shuttle/missions/sts-73/mission-sts-73.html |200 |4085 |
|burger.letters.com |01/Jul/1995:00:00:11 -0400|/shuttle/countdown/liftoff.html |304 |0 |
|199.120.110.21 |01/Jul/1995:00:00:11 -0400|/shuttle/missions/sts-73/sts-73-patch-small.gif|200 |4179 |
+———————+————————–+———————————————–+——+————+
only showing top 5 rows

2.2 Check data

bad_rows_df = split_df.filter(split_df[‘host’].isNull() |
                              split_df['timestamp'].isNull() |
                              split_df['path'].isNull() |
                              split_df['status'].isNull() |
                             split_df['content_size'].isNull())
bad_rows_df.count()
Out[20]: 33905

2.3Check no of rows which do not have digits

We have already seen that the content_type field has ‘-‘ instead of digits in RDDs

#bad_content_size_df = base_df.filter(~ base_df[‘value’].rlike(r’\d+$’))
bad_content_size_df.count()
Out[21]: 33905

2.4 Add ‘*’ to identify bad rows

To identify the rows that are bad, concatenate ‘*’ to the content_size field where the field does not have digits. It can be seen that the content_size has ‘-‘ instead of a valid number

from pyspark.sql.functions import lit, concat
bad_content_size_df.select(concat(bad_content_size_df['value'], lit('*'))).show(4,truncate=False)
+—————————————————————————————————————————————————+
|concat(value, *) |
+—————————————————————————————————————————————————+
|dd15-062.compuserve.com – – [01/Jul/1995:00:01:12 -0400] “GET /news/sci.space.shuttle/archive/sci-space-shuttle-22-apr-1995-40.txt HTTP/1.0” 404 -*|
|dynip42.efn.org – – [01/Jul/1995:00:02:14 -0400] “GET /software HTTP/1.0” 302 -* |
|ix-or10-06.ix.netcom.com – – [01/Jul/1995:00:02:40 -0400] “GET /software/winvn HTTP/1.0” 302 -* |
|ix-or10-06.ix.netcom.com – – [01/Jul/1995:00:03:24 -0400] “GET /software HTTP/1.0” 302 -* |
+—————————————————————————————————————————————————+

2.5 Fill NAs with 0s

# Replace all null content_size values with 0.

cleaned_df = split_df.na.fill({‘content_size’: 0})

3. Webserver  logs parsing with SparkR

library(SparkR)
library(stringr)
file_location = "/FileStore/tables/NASA_access_log_Jul95.gz"
file_location = "/FileStore/tables/NASA_access_log_Aug95.gz"
# Load the SparkR library


# Initiate a SparkR session
sparkR.session()
sc <- sparkR.session()
sqlContext <- sparkRSQL.init(sc)
df <- read.text(sqlContext,"/FileStore/tables/NASA_access_log_Jul95.gz")

#df=SparkR::select(df, "value")
#head(SparkR::collect(df))
#m=regexp_extract(df$value,'\\\\S+',1)

a=df %>% 
  withColumn('host', regexp_extract(df$value, '^(\\S+)', 1)) %>%
  withColumn('timestamp', regexp_extract(df$value, "((\\S+ -\\d{4}))", 2)) %>%
  withColumn('path', regexp_extract(df$value, '(\\"\\w+\\s+([^\\s]+)\\s+HTTP.*")', 2))  %>%
  withColumn('status', regexp_extract(df$value, '(^.*"\\s+([^\\s]+))', 2)) %>%
  withColumn('content_size', regexp_extract(df$value, '(^.*\\s+(\\d+)$)', 2))
#b=a%>% select(host,timestamp,path,status,content_type)
head(SparkR::collect(a),10)

1 199.72.81.55 – – [01/Jul/1995:00:00:01 -0400] “GET /history/apollo/ HTTP/1.0” 200 6245
2 unicomp6.unicomp.net – – [01/Jul/1995:00:00:06 -0400] “GET /shuttle/countdown/ HTTP/1.0” 200 3985
3 199.120.110.21 – – [01/Jul/1995:00:00:09 -0400] “GET /shuttle/missions/sts-73/mission-sts-73.html HTTP/1.0” 200 4085
4 burger.letters.com – – [01/Jul/1995:00:00:11 -0400] “GET /shuttle/countdown/liftoff.html HTTP/1.0” 304 0
5 199.120.110.21 – – [01/Jul/1995:00:00:11 -0400] “GET /shuttle/missions/sts-73/sts-73-patch-small.gif HTTP/1.0” 200 4179
6 burger.letters.com – – [01/Jul/1995:00:00:12 -0400] “GET /images/NASA-logosmall.gif HTTP/1.0” 304 0
7 burger.letters.com – – [01/Jul/1995:00:00:12 -0400] “GET /shuttle/countdown/video/livevideo.gif HTTP/1.0” 200 0
8 205.212.115.106 – – [01/Jul/1995:00:00:12 -0400] “GET /shuttle/countdown/countdown.html HTTP/1.0” 200 3985
9 d104.aa.net – – [01/Jul/1995:00:00:13 -0400] “GET /shuttle/countdown/ HTTP/1.0” 200 3985
10 129.94.144.152 – – [01/Jul/1995:00:00:13 -0400] “GET / HTTP/1.0” 200 7074
host timestamp
1 199.72.81.55 [01/Jul/1995:00:00:01 -0400
2 unicomp6.unicomp.net [01/Jul/1995:00:00:06 -0400
3 199.120.110.21 [01/Jul/1995:00:00:09 -0400
4 burger.letters.com [01/Jul/1995:00:00:11 -0400
5 199.120.110.21 [01/Jul/1995:00:00:11 -0400
6 burger.letters.com [01/Jul/1995:00:00:12 -0400
7 burger.letters.com [01/Jul/1995:00:00:12 -0400
8 205.212.115.106 [01/Jul/1995:00:00:12 -0400
9 d104.aa.net [01/Jul/1995:00:00:13 -0400
10 129.94.144.152 [01/Jul/1995:00:00:13 -0400
path status content_size
1 /history/apollo/ 200 6245
2 /shuttle/countdown/ 200 3985
3 /shuttle/missions/sts-73/mission-sts-73.html 200 4085
4 /shuttle/countdown/liftoff.html 304 0
5 /shuttle/missions/sts-73/sts-73-patch-small.gif 200 4179
6 /images/NASA-logosmall.gif 304 0
7 /shuttle/countdown/video/livevideo.gif 200 0
8 /shuttle/countdown/countdown.html 200 3985
9 /shuttle/countdown/ 200 3985
10 / 200 7074

4 Webserver logs parsing with SparklyR

install.packages("sparklyr")
library(sparklyr)
library(dplyr)
library(stringr)
#sc <- spark_connect(master = "local", version = "2.1.0")
sc <- spark_connect(method = "databricks")
sdf <-spark_read_text(sc, name="df", path = "/FileStore/tables/NASA_access_log*.gz")
sdf
Installing package into ‘/databricks/spark/R/lib’
# Source: spark [?? x 1]
   line                                                                         
                                                                           
 1 "199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] \"GET /history/apollo/ HTTP/1…
 2 "unicomp6.unicomp.net - - [01/Jul/1995:00:00:06 -0400] \"GET /shuttle/countd…
 3 "199.120.110.21 - - [01/Jul/1995:00:00:09 -0400] \"GET /shuttle/missions/sts…
 4 "burger.letters.com - - [01/Jul/1995:00:00:11 -0400] \"GET /shuttle/countdow…
 5 "199.120.110.21 - - [01/Jul/1995:00:00:11 -0400] \"GET /shuttle/missions/sts…
 6 "burger.letters.com - - [01/Jul/1995:00:00:12 -0400] \"GET /images/NASA-logo…
 7 "burger.letters.com - - [01/Jul/1995:00:00:12 -0400] \"GET /shuttle/countdow…
 8 "205.212.115.106 - - [01/Jul/1995:00:00:12 -0400] \"GET /shuttle/countdown/c…
 9 "d104.aa.net - - [01/Jul/1995:00:00:13 -0400] \"GET /shuttle/countdown/ HTTP…
10 "129.94.144.152 - - [01/Jul/1995:00:00:13 -0400] \"GET / HTTP/1.0\" 200 7074"
# … with more rows
#install.packages(“sparklyr”)
library(sparklyr)
library(dplyr)
library(stringr)
#sc <- spark_connect(master = "local", version = "2.1.0")
sc <- spark_connect(method = "databricks")
sdf <-spark_read_text(sc, name="df", path = "/FileStore/tables/NASA_access_log*.gz")
sdf <- sdf %>% mutate(host = regexp_extract(line, '^(\\\\S+)',1)) %>% 
               mutate(timestamp = regexp_extract(line, '((\\\\S+ -\\\\d{4}))',2)) %>%
               mutate(path = regexp_extract(line, '(\\\\"\\\\w+\\\\s+([^\\\\s]+)\\\\s+HTTP.*")',2)) %>%
               mutate(status = regexp_extract(line, '(^.*"\\\\s+([^\\\\s]+))',2)) %>%
               mutate(content_size = regexp_extract(line, '(^.*\\\\s+(\\\\d+)$)',2))

5 Hosts

5.1  RDD

5.11 Parse and map to hosts to groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))

# Create tuples of (host,1) and apply reduceByKey() and order by descending
rslt=(parsed_rdd2.map(lambda x😦x[0],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[18]:
[(‘piweba3y.prodigy.com’, 21988),
(‘piweba4y.prodigy.com’, 16437),
(‘piweba1y.prodigy.com’, 12825),
(‘edams.ksc.nasa.gov’, 11962),
(‘163.206.89.4’, 9697),
(‘news.ti.com’, 8161),
(‘www-d1.proxy.aol.com’, 8047),
(‘alyssa.prodigy.com’, 8037),
(‘siltb10.orl.mmc.com’, 7573),
(‘www-a2.proxy.aol.com’, 7516)]

5.12Plot counts of hosts

import seaborn as sns

import pandas as pd import matplotlib.pyplot as plt df=pd.DataFrame(rslt,columns=[‘host’,‘count’]) sns.barplot(x=‘host’,y=‘count’,data=df) plt.subplots_adjust(bottom=0.6, right=0.8, top=0.9) plt.xticks(rotation=“vertical”,fontsize=8) display()

5.2 PySpark

5.21 Compute counts of hosts

df= (cleaned_df
     .groupBy('host')
     .count()
     .orderBy('count',ascending=False))
df.show(5)
+——————–+—–+
| host|count|
+——————–+—–+
|piweba3y.prodigy….|21988|
|piweba4y.prodigy….|16437|
|piweba1y.prodigy….|12825|
| edams.ksc.nasa.gov |11964|
| 163.206.89.4 | 9697|
+——————–+—–+
only showing top 5 rows

5.22 Plot count of hosts

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df1=df.toPandas()
df2 = df1.head(10)
df2.count()
sns.barplot(x='host',y='count',data=df2)
plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9)
plt.xlabel("Hosts")
plt.ylabel('Count')
plt.xticks(rotation="vertical",fontsize=10)
display()

5.3 SparkR

5.31 Compute count of hosts

c <- SparkR::select(a,a$host)
df=SparkR::summarize(SparkR::groupBy(c, a$host), noHosts = count(a$host))
df1 =head(arrange(df,desc(df$noHosts)),10)
head(df1)
                  host noHosts
1 piweba3y.prodigy.com   17572
2 piweba4y.prodigy.com   11591
3 piweba1y.prodigy.com    9868
4   alyssa.prodigy.com    7852
5  siltb10.orl.mmc.com    7573
6 piweba2y.prodigy.com    5922

5.32 Plot count of hosts

library(ggplot2)
p <-ggplot(data=df1, aes(x=host, y=noHosts,fill=host)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Host') + ylab('Count')
p

5.4 SparklyR

5.41 Compute count of Hosts

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(host) %>% group_by(host) %>% summarise(noHosts=n()) %>% arrange(desc(noHosts))
df2 <-head(df1,10)

5.42 Plot count of hosts

library(ggplot2)

p <-ggplot(data=df2, aes(x=host, y=noHosts,fill=host)) + geom_bar(stat=identity”)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab(Host’) + ylab(Count’)

p

6 Paths

6.1 RDD

6.11 Parse and map to hosts to groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x😦x[5],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
[(‘”GET /images/NASA-logosmall.gif HTTP/1.0″‘, 207520),
(‘”GET /images/KSC-logosmall.gif HTTP/1.0″‘, 164487),
(‘”GET /images/MOSAIC-logosmall.gif HTTP/1.0″‘, 126933),
(‘”GET /images/USA-logosmall.gif HTTP/1.0″‘, 126108),
(‘”GET /images/WORLD-logosmall.gif HTTP/1.0″‘, 124972),
(‘”GET /images/ksclogo-medium.gif HTTP/1.0″‘, 120704),
(‘”GET /ksc.html HTTP/1.0″‘, 83209),
(‘”GET /images/launch-logo.gif HTTP/1.0″‘, 75839),
(‘”GET /history/apollo/images/apollo-logo1.gif HTTP/1.0″‘, 68759),
(‘”GET /shuttle/countdown/ HTTP/1.0″‘, 64467)]

6.12 Plot counts of HTTP Requests

import seaborn as sns

df=pd.DataFrame(rslt,columns=[‘path’,‘count’]) sns.barplot(x=‘path’,y=‘count’,data=df) plt.subplots_adjust(bottom=0.7, right=0.8, top=0.9) plt.xticks(rotation=“vertical”,fontsize=8)

display()

6.2 Pyspark

6.21 Compute count of HTTP Requests

df= (cleaned_df
     .groupBy('path')
     .count()
     .orderBy('count',ascending=False))
df.show(5)
Out[20]:
+——————–+——+
| path| count|
+——————–+——+
|/images/NASA-logo…|208362|
|/images/KSC-logos…|164813|
|/images/MOSAIC-lo…|127656|
|/images/USA-logos…|126820|
|/images/WORLD-log…|125676|
+——————–+——+
only showing top 5 rows

6.22 Plot count of HTTP Requests

import matplotlib.pyplot as plt

import pandas as pd import seaborn as sns df1=df.toPandas() df2 = df1.head(10) df2.count() sns.barplot(x=‘path’,y=‘count’,data=df2)

plt.subplots_adjust(bottom=0.7, right=0.8, top=0.9) plt.xlabel(“HTTP Requests”) plt.ylabel(‘Count’) plt.xticks(rotation=90,fontsize=8)

display()

 

6.3 SparkR

6.31Compute count of HTTP requests

library(SparkR)
c <- SparkR::select(a,a$path)
df=SparkR::summarize(SparkR::groupBy(c, a$path), numRequest = count(a$path))
df1=head(df)

3.14 Plot count of HTTP Requests

library(ggplot2)
p <-ggplot(data=df1, aes(x=path, y=numRequest,fill=path)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1))+ xlab('Path') + ylab('Count')
p

6.4 SparklyR

6.41 Compute count of paths

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(path) %>% group_by(path) %>% summarise(noPaths=n()) %>% arrange(desc(noPaths))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noPaths)
   path                                    noPaths
                                        
 1 /images/NASA-logosmall.gif               208362
 2 /images/KSC-logosmall.gif                164813
 3 /images/MOSAIC-logosmall.gif             127656
 4 /images/USA-logosmall.gif                126820
 5 /images/WORLD-logosmall.gif              125676
 6 /images/ksclogo-medium.gif               121286
 7 /ksc.html                                 83685
 8 /images/launch-logo.gif                   75960
 9 /history/apollo/images/apollo-logo1.gif   68858
10 /shuttle/countdown/                       64695

6.42 Plot count of Paths

library(ggplot2)
p <-ggplot(data=df2, aes(x=path, y=noPaths,fill=path)) +   geom_bar(stat="identity")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Path') + ylab('Count')
p

7.1 RDD

7.11 Compute count of HTTP Status

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])

parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x😦x[7],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[22]:
[(‘200’, 3095682),
(‘304’, 266764),
(‘302’, 72970),
(‘404’, 20625),
(‘403’, 225),
(‘500’, 65),
(‘501’, 41)]

1.37 Plot counts of HTTP response status’

import seaborn as sns

df=pd.DataFrame(rslt,columns=[‘status’,‘count’]) sns.barplot(x=‘status’,y=‘count’,data=df) plt.subplots_adjust(bottom=0.4, right=0.8, top=0.9) plt.xticks(rotation=“vertical”,fontsize=8)

display()

7.2 Pyspark

7.21 Compute count of HTTP status

status_count=(cleaned_df
                .groupBy('status')
                .count()
                .orderBy('count',ascending=False))
status_count.show()
+——+——-+
|status| count|
+——+——-+
| 200|3100522|
| 304| 266773|
| 302| 73070|
| 404| 20901|
| 403| 225|
| 500| 65|
| 501| 41|
| 400| 15|
| null| 1|

7.22 Plot count of HTTP status

Plot the HTTP return status vs the counts

df1=status_count.toPandas()

df2 = df1.head(10) df2.count() sns.barplot(x=‘status’,y=‘count’,data=df2) plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9) plt.xlabel(“HTTP Status”) plt.ylabel(‘Count’) plt.xticks(rotation=“vertical”,fontsize=10) display()

7.3 SparkR

7.31 Compute count of HTTP Response status

library(SparkR)
c <- SparkR::select(a,a$status)
df=SparkR::summarize(SparkR::groupBy(c, a$status), numStatus = count(a$status))
df1=head(df)

3.16 Plot count of HTTP Response status

library(ggplot2)
p <-ggplot(data=df1, aes(x=status, y=numStatus,fill=status)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Status') + ylab('Count')
p

7.4 SparklyR

7.41 Compute count of status

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(status) %>% group_by(status) %>% summarise(noStatus=n()) %>% arrange(desc(noStatus))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noStatus)
  status noStatus
       
1 200     3100522
2 304      266773
3 302       73070
4 404       20901
5 403         225
6 500          65
7 501          41
8 400          15
9 ""            1

7.42 Plot count of status

library(ggplot2)

p <-ggplot(data=df2, aes(x=status, y=noStatus,fill=status)) + geom_bar(stat=identity”)+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab(Status’) + ylab(Count’) p

8.1 RDD

8.12 Compute count of content size

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x😦x[8],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[24]:
[(‘0’, 280017),
(‘786’, 167281),
(‘1204’, 140505),
(‘363’, 111575),
(‘234’, 110824),
(‘669’, 110056),
(‘5866’, 107079),
(‘1713’, 66904),
(‘1173’, 63336),
(‘3635’, 55528)]

8.21 Plot content size

import seaborn as sns

df=pd.DataFrame(rslt,columns=[‘content_size’,‘count’]) sns.barplot(x=‘content_size’,y=‘count’,data=df) plt.subplots_adjust(bottom=0.4, right=0.8, top=0.9) plt.xticks(rotation=“vertical”,fontsize=8) display()

8.2 Pyspark

8.21 Compute count of content_size

size_counts=(cleaned_df
                .groupBy('content_size')
                .count()
                .orderBy('count',ascending=False))
size_counts.show(10)
+------------+------+
|content_size| count|
+------------+------+
|           0|313932|
|         786|167709|
|        1204|140668|
|         363|111835|
|         234|111086|
|         669|110313|
|        5866|107373|
|        1713| 66953|
|        1173| 63378|
|        3635| 55579|
+------------+------+
only showing top 10 rows

8.22 Plot counts of content size

Plot the path access versus the counts

df1=size_counts.toPandas()

df2 = df1.head(10) df2.count() sns.barplot(x=‘content_size’,y=‘count’,data=df2) plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9) plt.xlabel(“content_size”) plt.ylabel(‘Count’) plt.xticks(rotation=“vertical”,fontsize=10) display()

8.3 SparkR

8.31 Compute count of content size

library(SparkR)
c <- SparkR::select(a,a$content_size)
df=SparkR::summarize(SparkR::groupBy(c, a$content_size), numContentSize = count(a$content_size))
df1=head(df)
df1
     content_size numContentSize
1        28426           1414
2        78382            293
3        60053              4
4        36067              2
5        13282            236
6        41785            174
8.32 Plot count of content sizes
library(ggplot2)

p <-ggplot(data=df1, aes(x=content_size, y=numContentSize,fill=content_size)) + geom_bar(stat=identity”) + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab(Content Size’) + ylab(Count’)

p

8.4 SparklyR

8.41Compute count of content_size

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(content_size) %>% group_by(content_size) %>% summarise(noContentSize=n()) %>% arrange(desc(noContentSize))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noContentSize)
   content_size noContentSize
                   
 1 0                   280027
 2 786                 167709
 3 1204                140668
 4 363                 111835
 5 234                 111086
 6 669                 110313
 7 5866                107373
 8 1713                 66953
 9 1173                 63378
10 3635                 55579

8.42 Plot count of content_size

library(ggplot2)
p <-ggplot(data=df2, aes(x=content_size, y=noContentSize,fill=content_size)) +   geom_bar(stat="identity")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Content size') + ylab('Count')
p

Conclusion: I spent many,many hours struggling with Regex and getting RDDs,Pyspark to work. Also had to spend a lot of time trying to work out the syntax for SparkR and SparklyR for parsing. After you parse the logs plotting and analysis is a piece of cake! This is definitely worth a try!

Watch this space!!

Also see
1. Practical Machine Learning with R and Python – Part 3
2. Deep Learning from first principles in Python, R and Octave – Part 5
3. My book ‘Cricket analytics with cricketr and cricpy’ is now on Amazon
4. Latency, throughput implications for the Cloud
5. Modeling a Car in Android
6. Architecting a cloud based IP Multimedia System (IMS)
7. Dabbling with Wiener filter using OpenCV

To see all posts click Index of posts

Big Data: On RDDs, Dataframes,Hive QL with Pyspark and SparkR-Part 3

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. – Jamie Zawinski

Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. – @tomscott

Some people, when confronted with a problem, think “I know, I’ll use multithreading”. Nothhw tpe yawrve o oblems. – @d6

Some people, when confronted with a problem, think “I know, I’ll use versioning.” Now they have 2.1.0 problems. – @JaesCoyle

Some people, when faced with a problem, think, “I know, I’ll use binary.” Now they have 10 problems. – @nedbat

Introduction

The power of Spark, which operates on in-memory datasets, is the fact that it stores the data as collections using Resilient Distributed Datasets (RDDs), which are themselves distributed in partitions across clusters. RDDs, are a fast way of processing data, as the data is operated on parallel based on the map-reduce paradigm. RDDs can be be used when the operations are low level. RDDs, are typically used on unstructured data like logs or text. For structured and semi-structured data, Spark has a higher abstraction called Dataframes. Handling data through dataframes are extremely fast as they are Optimized using the Catalyst Optimization engine and the performance is orders of magnitude faster than RDDs. In addition Dataframes also use Tungsten which handle memory management and garbage collection more effectively.

The picture below shows the performance improvement achieved with Dataframes over RDDs

Benefits from Project Tungsten

Npte: The above data and graph is taken from the course Big Data Analysis with Apache Spark at edX, UC Berkeley
This post is a continuation of my 2 earlier posts
1. Big Data-1: Move into the big league:Graduate from Python to Pyspark
2. Big Data-2: Move into the big league:Graduate from R to SparkR

In this post I perform equivalent operations on a small dataset using RDDs, Dataframes in Pyspark & SparkR and HiveQL. As in some of my earlier posts, I have used the tendulkar.csv file for this post. The dataset is small and allows me to do most everything from data cleaning, data transformation and grouping etc.
You can clone fork the notebooks from github at Big Data:Part 3

The notebooks have also been published and can be accessed below

  1. Big Data-1: On RDDs, DataFrames and HiveQL with Pyspark
  2. Big Data-2:On RDDs, Dataframes and HiveQL with SparkR

1. RDD – Select all columns of tables

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
rdd.map(lambda line: (line.split(","))).take(5)
Out[90]: [[‘Runs’, ‘Mins’, ‘BF’, ‘4s’, ‘6s’, ‘SR’, ‘Pos’, ‘Dismissal’, ‘Inns’, ‘Opposition’, ‘Ground’, ‘Start Date’], [’15’, ’28’, ’24’, ‘2’, ‘0’, ‘62.5’, ‘6’, ‘bowled’, ‘2’, ‘v Pakistan’, ‘Karachi’, ’15-Nov-89′], [‘DNB’, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘-‘, ‘4’, ‘v Pakistan’, ‘Karachi’, ’15-Nov-89′], [’59’, ‘254’, ‘172’, ‘4’, ‘0’, ‘34.3’, ‘6’, ‘lbw’, ‘1’, ‘v Pakistan’, ‘Faisalabad’, ’23-Nov-89′], [‘8′, ’24’, ’16’, ‘1’, ‘0’, ’50’, ‘6’, ‘run out’, ‘3’, ‘v Pakistan’, ‘Faisalabad’, ’23-Nov-89′]]

1b.RDD – Select columns 1 to 4

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
rdd.map(lambda line: (line.split(",")[0:4])).take(5)
Out[91]:
[[‘Runs’, ‘Mins’, ‘BF’, ‘4s’],
[’15’, ’28’, ’24’, ‘2’],
[‘DNB’, ‘-‘, ‘-‘, ‘-‘],
[’59’, ‘254’, ‘172’, ‘4’],
[‘8′, ’24’, ’16’, ‘1’]]

1c. RDD – Select specific columns 0, 10

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df.map(lambda x: (x[10],x[0])).take(5)
Out[92]:
[(‘Ground’, ‘Runs’),
(‘Karachi’, ’15’),
(‘Karachi’, ‘DNB’),
(‘Faisalabad’, ’59’),
(‘Faisalabad’, ‘8’)]

2. Dataframe:Pyspark – Select all columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.show(5)
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
|Runs|Mins| BF| 4s| 6s| SR|Pos|Dismissal|Inns|Opposition| Ground|Start Date|
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
| 15| 28| 24| 2| 0| 62.5| 6| bowled| 2|v Pakistan| Karachi| 15-Nov-89|
| DNB| -| -| -| -| -| -| -| 4|v Pakistan| Karachi| 15-Nov-89|
| 59| 254|172| 4| 0| 34.3| 6| lbw| 1|v Pakistan|Faisalabad| 23-Nov-89|
| 8| 24| 16| 1| 0| 50| 6| run out| 3|v Pakistan|Faisalabad| 23-Nov-89|
| 41| 124| 90| 5| 0|45.55| 7| bowled| 1|v Pakistan| Lahore| 1-Dec-89|
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
only showing top 5 rows

2a. Dataframe:Pyspark- Select specific columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.select("Runs","BF","Mins").show(5)
+—-+—+—-+
|Runs| BF|Mins|
+—-+—+—-+
| 15| 24| 28|
| DNB| -| -|
| 59|172| 254|
| 8| 16| 24|
| 41| 90| 124|
+—-+—+—-+

3. Dataframe:SparkR – Select all columns

# Load the SparkR library
library(SparkR)
# Initiate a SparkR session
sparkR.session()
tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

# Check the dimensions of the dataframe
df=SparkR::select(tendulkar1,"*")
head(SparkR::collect(df))

  Runs Mins  BF 4s 6s    SR Pos Dismissal Inns Opposition     Ground Start Date
1   15   28  24  2  0  62.5   6    bowled    2 v Pakistan    Karachi  15-Nov-89
2  DNB    -   -  -  -     -   -         -    4 v Pakistan    Karachi  15-Nov-89
3   59  254 172  4  0  34.3   6       lbw    1 v Pakistan Faisalabad  23-Nov-89
4    8   24  16  1  0    50   6   run out    3 v Pakistan Faisalabad  23-Nov-89
5   41  124  90  5  0 45.55   7    bowled    1 v Pakistan     Lahore   1-Dec-89
6   35   74  51  5  0 68.62   6       lbw    1 v Pakistan    Sialkot   9-Dec-89

3a. Dataframe:SparkR- Select specific columns

# Load the SparkR library
library(SparkR)
# Initiate a SparkR session
sparkR.session()
tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

# Check the dimensions of the dataframe
df=SparkR::select(tendulkar1, "Runs", "BF","Mins")
head(SparkR::collect(df))
Runs BF Mins
1 15 24 28
2 DNB – –
3 59 172 254
4 8 16 24
5 41 90 124
6 35 51 74

4. Hive QL – Select all columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  * from tendulkar1_table limit 5').show(10, truncate = False)
+—-+—+—-++—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
|Runs|Mins|BF |4s |6s |SR |Pos|Dismissal|Inns|Opposition|Ground |Start Date|
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
|15 |28 |24 |2 |0 |62.5 |6 |bowled |2 |v Pakistan|Karachi |15-Nov-89 |
|DNB |- |- |- |- |- |- |- |4 |v Pakistan|Karachi |15-Nov-89 |
|59 |254 |172|4 |0 |34.3 |6 |lbw |1 |v Pakistan|Faisalabad|23-Nov-89 |
|8 |24 |16 |1 |0 |50 |6 |run out |3 |v Pakistan|Faisalabad|23-Nov-89 |
|41 |124 |90 |5 |0 |45.55|7 |bowled |1 |v Pakistan|Lahore |1-Dec-89 |
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+

4a. Hive QL – Select specific columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  Runs, BF,Mins from tendulkar1_table limit 5').show(10, truncate = False)
+—-+—+—-+
|Runs|BF |Mins|
+—-+—+—-+
|15 |24 |28 |
|DNB |- |- |
|59 |172|254 |
|8 |16 |24 |
|41 |90 |124 |
+—-+—+—-+

5. RDD – Filter rows on specific condition

from pyspark import SparkContext
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=(rdd.map(lambda line: line.split(",")[:])
      .filter(lambda x: x !="DNB")
      .filter(lambda x: x!= "TDNB")
      .filter(lambda x: x!="absent")
      .map(lambda x: [x[0].replace("*","")] + x[1:]))

df.take(5)

Out[97]:
[[‘Runs’,
‘Mins’,
‘BF’,
‘4s’,
‘6s’,
‘SR’,
‘Pos’,
‘Dismissal’,
‘Inns’,
‘Opposition’,
‘Ground’,
‘Start Date’],
[’15’,
’28’,
’24’,
‘2’,
‘0’,
‘62.5’,
‘6’,
‘bowled’,
‘2’,
‘v Pakistan’,
‘Karachi’,
’15-Nov-89′],
[‘DNB’,
‘-‘,
‘-‘,
‘-‘,
‘-‘,
‘-‘,
‘-‘,
‘-‘,
‘4’,
‘v Pakistan’,
‘Karachi’,
’15-Nov-89′],
[’59’,
‘254’,
‘172’,
‘4’,
‘0’,
‘34.3’,
‘6’,
‘lbw’,
‘1’,
‘v Pakistan’,
‘Faisalabad’,
’23-Nov-89′],
[‘8′,
’24’,
’16’,
‘1’,
‘0’,
’50’,
‘6’,
‘run out’,
‘3’,
‘v Pakistan’,
‘Faisalabad’,
’23-Nov-89′]]

5a. Dataframe:Pyspark – Filter rows on specific condition

from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_replace
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'absent')
tendulkar1 = tendulkar1.withColumn('Runs', regexp_replace('Runs', '[*]', ''))
tendulkar1.show(5)
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
|Runs|Mins| BF| 4s| 6s| SR|Pos|Dismissal|Inns|Opposition| Ground|Start Date|
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
| 15| 28| 24| 2| 0| 62.5| 6| bowled| 2|v Pakistan| Karachi| 15-Nov-89|
| 59| 254|172| 4| 0| 34.3| 6| lbw| 1|v Pakistan|Faisalabad| 23-Nov-89|
| 8| 24| 16| 1| 0| 50| 6| run out| 3|v Pakistan|Faisalabad| 23-Nov-89|
| 41| 124| 90| 5| 0|45.55| 7| bowled| 1|v Pakistan| Lahore| 1-Dec-89|
| 35| 74| 51| 5| 0|68.62| 6| lbw| 1|v Pakistan| Sialkot| 9-Dec-89|
+—-+—-+—+—+—+—–+—+———+—-+———-+———-+———-+
only showing top 5 rows

5b. Dataframe:SparkR – Filter rows on specific condition

sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
df=SparkR::select(tendulkar1,"*")
head(SparkR::collect(df))

5c Hive QL – Filter rows on specific condition

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  Runs, BF,Mins from tendulkar1_table where Runs NOT IN  ("DNB","TDNB","absent")').show(10, truncate = False)
+—-+—+—-+
|Runs|BF |Mins|
+—-+—+—-+
|15 |24 |28 |
|59 |172|254 |
|8 |16 |24 |
|41 |90 |124 |
|35 |51 |74 |
|57 |134|193 |
|0 |1 |1 |
|24 |44 |50 |
|88 |266|324 |
|5 |13 |15 |
+—-+—+—-+
only showing top 10 rows

6. RDD – Find rows where Runs > 50

from pyspark import SparkContext
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df=rdd.map(lambda line: line.split(",")[0:4]) \
   .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
df1=df.map(lambda x: [x[0].replace("*","")] + x[1:4])
header=df1.first()
df2=df1.filter(lambda x: x !=header)
df3=df2.map(lambda x: [float(x[0])] +x[1:4])
df3.filter(lambda x: x[0]>=50).take(10)
Out[101]: 
[[59.0, '254', '172', '4'],
 [57.0, '193', '134', '6'],
 [88.0, '324', '266', '5'],
 [68.0, '216', '136', '8'],
 [119.0, '225', '189', '17'],
 [148.0, '298', '213', '14'],
 [114.0, '228', '161', '16'],
 [111.0, '373', '270', '19'],
 [73.0, '272', '208', '8'],
 [50.0, '158', '118', '6']]

6a. Dataframe:Pyspark – Find rows where Runs >50

from pyspark.sql import SparkSession

from pyspark.sql.functions import regexp_replace
from pyspark.sql.types import IntegerType
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'absent')
tendulkar1 = tendulkar1.withColumn("Runs", tendulkar1["Runs"].cast(IntegerType()))
tendulkar1.filter(tendulkar1['Runs']>=50).show(10)
+—-+—-+—+—+—+—–+—+———+—-+————–+————+———-+
|Runs|Mins| BF| 4s| 6s| SR|Pos|Dismissal|Inns| Opposition| Ground|Start Date|
+—-+—-+—+—+—+—–+—+———+—-+————–+————+———-+
| 59| 254|172| 4| 0| 34.3| 6| lbw| 1| v Pakistan| Faisalabad| 23-Nov-89|
| 57| 193|134| 6| 0|42.53| 6| caught| 3| v Pakistan| Sialkot| 9-Dec-89|
| 88| 324|266| 5| 0|33.08| 6| caught| 1| v New Zealand| Napier| 9-Feb-90|
| 68| 216|136| 8| 0| 50| 6| caught| 2| v England| Manchester| 9-Aug-90|
| 114| 228|161| 16| 0| 70.8| 4| caught| 2| v Australia| Perth| 1-Feb-92|
| 111| 373|270| 19| 0|41.11| 4| caught| 2|v South Africa|Johannesburg| 26-Nov-92|
| 73| 272|208| 8| 1|35.09| 5| caught| 2|v South Africa| Cape Town| 2-Jan-93|
| 50| 158|118| 6| 0|42.37| 4| caught| 1| v England| Kolkata| 29-Jan-93|
| 165| 361|296| 24| 1|55.74| 4| caught| 1| v England| Chennai| 11-Feb-93|
| 78| 285|213| 10| 0|36.61| 4| lbw| 2| v England| Mumbai| 19-Feb-93|
+—-+—-+—+—+—+—–+—+———+—-+————–+————+———-+

6b. Dataframe:SparkR – Find rows where Runs >50

# Load the SparkR library
library(SparkR)
sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
df=SparkR::select(tendulkar1,"*")
df=SparkR::filter(tendulkar1, tendulkar1$Runs > 50)
head(SparkR::collect(df))
  Runs Mins  BF 4s 6s    SR Pos Dismissal Inns    Opposition     Ground
1   59  254 172  4  0  34.3   6       lbw    1    v Pakistan Faisalabad
2   57  193 134  6  0 42.53   6    caught    3    v Pakistan    Sialkot
3   88  324 266  5  0 33.08   6    caught    1 v New Zealand     Napier
4   68  216 136  8  0    50   6    caught    2     v England Manchester
5  119  225 189 17  0 62.96   6   not out    4     v England Manchester
6  148  298 213 14  0 69.48   6   not out    2   v Australia     Sydney
  Start Date
1  23-Nov-89
2   9-Dec-89
3   9-Feb-90
4   9-Aug-90
5   9-Aug-90
6   2-Jan-92

 

7 RDD – groupByKey() and reduceByKey()

from pyspark import SparkContext
from pyspark.mllib.stat import Statistics
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df=rdd.map(lambda line: line.split(",")[0:]) \
   .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
df1=df.map(lambda x: [x[0].replace("*","")] + x[1:])
header=df1.first()
df2=df1.filter(lambda x: x !=header)
df3=df2.map(lambda x: [float(x[0])] +x[1:])
df4 = df3.map(lambda x: (x[10],x[0]))
df5=df4.reduceByKey(lambda a,b: a+b,1)
df4.groupByKey().mapValues(lambda x: sum(x) / len(x)).take(10)

[(‘Georgetown’, 81.0),
(‘Lahore’, 17.0),
(‘Adelaide’, 32.6),
(‘Colombo (SSC)’, 77.55555555555556),
(‘Nagpur’, 64.66666666666667),
(‘Auckland’, 5.0),
(‘Bloemfontein’, 85.0),
(‘Centurion’, 73.5),
(‘Faisalabad’, 27.0),
(‘Bridgetown’, 26.0)]

7a Dataframe:Pyspark – Compute mean, min and max

from pyspark.sql.functions import *
tendulkar1= (sqlContext
         .read.format("com.databricks.spark.csv")
         .options(delimiter=',', header='true', inferschema='true')
         .load("/FileStore/tables/tendulkar.csv"))
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1 = tendulkar1.withColumn('Runs', regexp_replace('Runs', '[*]', ''))
tendulkar1.select('Runs').rdd.distinct().collect()

from pyspark.sql import functions as F
df=tendulkar1[['Runs','BF','Ground']].groupby(tendulkar1['Ground']).agg(F.mean(tendulkar1['Runs']),F.min(tendulkar1['Runs']),F.max(tendulkar1['Runs']))
df.show()
————-+—————–+———+———+
| Ground| avg(Runs)|min(Runs)|max(Runs)|
+————-+—————–+———+———+
| Bangalore| 54.3125| 0| 96|
| Adelaide| 32.6| 0| 61|
|Colombo (PSS)| 37.2| 14| 71|
| Christchurch| 12.0| 0| 24|
| Auckland| 5.0| 5| 5|
| Chennai| 60.625| 0| 81|
| Centurion| 73.5| 111| 36|
| Brisbane|7.666666666666667| 0| 7|
| Birmingham| 46.75| 1| 40|
| Ahmedabad| 40.125| 100| 8|
|Colombo (RPS)| 143.0| 143| 143|
| Chittagong| 57.8| 101| 36|
| Cape Town|69.85714285714286| 14| 9|
| Bridgetown| 26.0| 0| 92|
| Bulawayo| 55.0| 36| 74|
| Delhi|39.94736842105263| 0| 76|
| Chandigarh| 11.0| 11| 11|
| Bloemfontein| 85.0| 15| 155|
|Colombo (SSC)|77.55555555555556| 104| 8|
| Cuttack| 2.0| 2| 2|
+————-+—————–+———+———+
only showing top 20 rows

7b Dataframe:SparkR – Compute mean, min and max

sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
df=SparkR::summarize(SparkR::groupBy(tendulkar1, tendulkar1$Ground), mean = mean(tendulkar1$Runs), minRuns=min(tendulkar1$Runs),maxRuns=max(tendulkar1$Runs))
head(df,20)
          Ground       mean minRuns maxRuns
1      Bangalore  54.312500       0      96
2       Adelaide  32.600000       0      61
3  Colombo (PSS)  37.200000      14      71
4   Christchurch  12.000000       0      24
5       Auckland   5.000000       5       5
6        Chennai  60.625000       0      81
7      Centurion  73.500000     111      36
8       Brisbane   7.666667       0       7
9     Birmingham  46.750000       1      40
10     Ahmedabad  40.125000     100       8
11 Colombo (RPS) 143.000000     143     143
12    Chittagong  57.800000     101      36
13     Cape Town  69.857143      14       9
14    Bridgetown  26.000000       0      92
15      Bulawayo  55.000000      36      74
16         Delhi  39.947368       0      76
17    Chandigarh  11.000000      11      11
18  Bloemfontein  85.000000      15     155
19 Colombo (SSC)  77.555556     104       8
20       Cuttack   2.000000       2       2

Also see
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2.My book ‘Deep Learning from first principles:Second Edition’ now on Amazon
3.The Clash of the Titans in Test and ODI cricket
4. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
5.Latency, throughput implications for the Cloud
6. Simulating a Web Joint in Android
5. Pitching yorkpy … short of good length to IPL – Part 1

To see all posts click Index of Posts

Pitching yorkpy … short of good length to IPL – Part 1

I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.
Bruce Lee

I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.
Michael Jordan

Man, it doesn’t matter where you come in to bat, the score is still zero
Viv Richards

Introduction

“If cricketr is to cricpy, then yorkr is to _____?”. Yes, you guessed it right, it is yorkpy. In this post, I introduce my 2nd python package, yorkpy, which is a python clone of my R package yorkr. This package is based on data from Cricsheet. yorkpy currently handles IPL T20 matches.

When I created cricpy, the python avatar, of my R package cricketr, see Introducing cricpy:A python package to analyze performances of cricketers, I had decided that I should avoid doing a python avatar of my R package yorkr (see Introducing cricket package yorkr: Part 1- Beaten by sheer pace!) , as it was more involved, and required the parsing of match data available as yaml files.

Just out of curiosity, I tried the python package ‘yaml’ to read the match data, and lo and behold, I was sucked into the developing the package and so, yorkpy was born. Of course, it goes without saying that, usually when I am in the thick of developing something, I occasionally wonder, why I am doing it, for whom and for what purpose? Maybe it is the joy of ideation, the problem-solving,  the programmer’s high, for sharing my ideas etc. Anyway, whatever be the reason, I hope you enjoy this post and also find yorkpy useful.

You can clone/download the code at Github yorkpy
This post has been published to RPubs at yorkpy-Part1
You can download this post as PDF at IPLT20-yorkpy-part1

Note: If you would like to do a similar analysis for a different set of batsman and bowlers, you can clone/download my skeleton yorkpy-template from Github (which is the R Markdown file I have used for the analysis below).

The IPL T20 functions in yorkpy are

2. Install the package using ‘pip install’

import pandas as pd
import yorkpy.analytics as yka
#pip install yorkpy

3. Load a yaml file from Cricsheet

There are 2 functions that can be to convert the IPL Twenty20 yaml files to pandas dataframeare

  1. convertYaml2PandasDataframeT20
  2. convertAllYaml2PandasDataframesT20

Note 1: While I have already converted the IPL T20 files, you will need to use these functions for future IPL matches

4. Convert and save IPL T20 yaml file to pandas dataframe

This function will convert a IPL T20 IPL yaml file, in the format as specified in Cricsheet to pandas dataframe. This will be saved as as CSV file in the target directory. The name of the file wil have the following format team1-team2-date.csv. The IPL T20 zip file can be downloaded from Indian Premier League matches.  An example of how a yaml file can be converted to a dataframe and saved is shown below.

import pandas as pd
import yorkpy.analytics as yka
#convertYaml2PandasDataframe(".\\1082593.yaml","..\ipl", ..\\data")

5. Convert and save all IPL T20 yaml files to dataframes

This function will convert all IPL T20 yaml files from a source directory to dataframes, and save it in the target directory, with the names as mentioned above. Since I have already done this, I will not be executing this again. You can download the zip of all the converted RData files from Github at yorkpyData

import pandas as pd
import yorkpy.analytics as yka
#convertAllYaml2PandasDataframes("..\\ipl", "..\\data")

You can download the the zip of the files and use it directly in the functions as follows.For the analysis below I chosen a set of random IPL matches

The randomly selected IPL T20 matches are

  • Chennai Super Kings vs Kings Xi Punjab, 2014-05-30
  • Deccan Chargers vs Delhi Daredevils, 2012-05-10
  • Gujarat Lions vs Mumbai Indians, 2017-04-29
  • Kolkata Knight Riders vs Rajasthan Royals, 2010-04-17
  • Rising Pune Supergiants vs Royal Challengers Bangalore, 2017-04-29

6. Team batting scorecard

The function below computes the batting score card of a team in an IPL match. The scorecard gives the balls faced, the runs scored, 4s, 6s and strike rate. The example below is based on the CSK KXIP match on 30 May 2014.

You can check against the actual scores in this match Chennai Super Kings-Kings XI Punjab-2014-05-30

import pandas as pd
import yorkpy.analytics as yka
csk_kxip=pd.read_csv(".\\Chennai Super Kings-Kings XI Punjab-2014-05-30.csv")
scorecard,extras=yka.teamBattingScorecardMatch(csk_kxip,"Chennai Super Kings")
print(scorecard)
##         batsman  runs  balls  4s  6s          SR
## 0      DR Smith     7     12   0   0   58.333333
## 1  F du Plessis     0      1   0   0    0.000000
## 2      SK Raina    87     26  12   6  334.615385
## 3   BB McCullum    11     16   0   0   68.750000
## 4     RA Jadeja    27     22   2   1  122.727273
## 5     DJ Hussey     1      3   0   0   33.333333
## 6      MS Dhoni    42     34   3   3  123.529412
## 7      R Ashwin    10     11   0   0   90.909091
## 8     MM Sharma     1      3   0   0   33.333333
print(extras)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    428     14        3        5     5        0      27
print("\n\n")
scorecard1,extras1=yka.teamBattingScorecardMatch(csk_kxip,"Kings XI Punjab")
print(scorecard1)
##       batsman  runs  balls  4s  6s          SR
## 0    V Sehwag   122     62  12   8  196.774194
## 1     M Vohra    34     33   1   2  103.030303
## 2  GJ Maxwell    13      8   1   1  162.500000
## 3   DA Miller    38     19   5   1  200.000000
## 4   GJ Bailey     1      2   0   0   50.000000
## 5     WP Saha     6      4   0   1  150.000000
## 6  MG Johnson     1      1   0   0  100.000000
print(extras1)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    428     14        3        5     5        0      27

Let’s take another random match between Gujarat Lions and Mumbai Indian on 29 Apr 2017 Gujarat Lions-Mumbai Indians-2017-04-29

import pandas as pd
gl_mi=pd.read_csv(".\\Gujarat Lions-Mumbai Indians-2017-04-29.csv")
import yorkpy.analytics as yka
scorecard,extras=yka.teamBattingScorecardMatch(gl_mi,"Gujarat Lions")
print(scorecard)
##          batsman  runs  balls  4s  6s          SR
## 0   Ishan Kishan    48     38   6   2  126.315789
## 1    BB McCullum     6      4   1   0  150.000000
## 2       SK Raina     1      3   0   0   33.333333
## 3       AJ Finch     0      3   0   0    0.000000
## 4     KD Karthik     2      9   0   0   22.222222
## 5      RA Jadeja    28     22   2   1  127.272727
## 6    JP Faulkner    21     29   2   0   72.413793
## 7      IK Pathan     2      3   0   0   66.666667
## 8         AJ Tye    25     12   2   2  208.333333
## 9   Basil Thampi     2      4   0   0   50.000000
## 10    Ankit Soni     7      2   0   1  350.000000
print(extras)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    306      8        3        1     0        0      12
print("\n\n")
scorecard1,extras1=yka.teamBattingScorecardMatch(gl_mi,"Mumbai Indians")
print(scorecard1)
##             batsman  runs  balls  4s  6s          SR
## 0          PA Patel    70     45   9   1  155.555556
## 1        JC Buttler     9      7   2   0  128.571429
## 2            N Rana    19     16   1   1  118.750000
## 3         RG Sharma     5     13   0   0   38.461538
## 4        KA Pollard    15     11   2   0  136.363636
## 5         KH Pandya    29     20   2   1  145.000000
## 6         HH Pandya     4      5   0   0   80.000000
## 7   Harbhajan Singh     0      1   0   0    0.000000
## 8    MJ McClenaghan     1      1   0   0  100.000000
## 9         JJ Bumrah     0      1   0   0    0.000000
## 10       SL Malinga     0      1   0   0    0.000000
print(extras1)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    306      8        3        1     0        0      12

7. Plot the team batting partnerships

The functions below plot the team batting partnership in the match. It shows what the partnership were in the mtach

Note: Many of the plots include an additional parameters plot which is either True or False. The default value is plot=True. When plot=True the plot will be displayed. When plot=False the data frame will be returned to the user. The user can use this to create an interactive chart using one of the packages like rcharts, ggvis,googleVis or plotly.

import pandas as pd
import yorkpy.analytics as yka
dc_dd=pd.read_csv(".\\Deccan Chargers-Delhi Daredevils-2012-05-10.csv")
yka.teamBatsmenPartnershipMatch(dc_dd,'Deccan Chargers','Delhi Daredevils')

yka.teamBatsmenPartnershipMatch(dc_dd,'Delhi Daredevils','Deccan Chargers',plot=True)
# Print partnerships as a dataframe

rps_rcb=pd.read_csv(".\\Rising Pune Supergiant-Royal Challengers Bangalore-2017-04-29.csv")
m=yka.teamBatsmenPartnershipMatch(rps_rcb,'Royal Challengers Bangalore','Rising Pune Supergiant',plot=False)
print(m)
##            batsman     non_striker  runs
## 0   AB de Villiers         V Kohli     3
## 1         AF Milne         V Kohli     5
## 2        KM Jadhav         V Kohli     7
## 3           P Negi         V Kohli     3
## 4        S Aravind         V Kohli     0
## 5        S Aravind       YS Chahal     8
## 6         S Badree         V Kohli     2
## 7        STR Binny         V Kohli     1
## 8      Sachin Baby         V Kohli     2
## 9          TM Head         V Kohli     2
## 10         V Kohli  AB de Villiers    17
## 11         V Kohli        AF Milne     5
## 12         V Kohli       KM Jadhav     4
## 13         V Kohli          P Negi     9
## 14         V Kohli       S Aravind     2
## 15         V Kohli        S Badree     8
## 16         V Kohli     Sachin Baby     1
## 17         V Kohli         TM Head     9
## 18       YS Chahal       S Aravind     4

8. Batsmen vs Bowler

The function below computes and plots the performances of the batsmen vs the bowlers. As before the plot parameter can be set to True or False. By default it is plot=True

import pandas as pd
import yorkpy.analytics as yka
gl_mi=pd.read_csv(".\\Gujarat Lions-Mumbai Indians-2017-04-29.csv")
yka.teamBatsmenVsBowlersMatch(gl_mi,"Gujarat Lions","Mumbai Indians", plot=True)
# Print 

csk_kxip=pd.read_csv(".\\Chennai Super Kings-Kings XI Punjab-2014-05-30.csv")
m=yka.teamBatsmenVsBowlersMatch(csk_kxip,'Chennai Super Kings','Kings XI Punjab',plot=False)
print(m)
##          batsman           bowler  runs
## 0    BB McCullum         AR Patel     4
## 1    BB McCullum       GJ Maxwell     1
## 2    BB McCullum  Karanveer Singh     6
## 3      DJ Hussey          P Awana     1
## 4       DR Smith       MG Johnson     7
## 5       DR Smith          P Awana     0
## 6       DR Smith   Sandeep Sharma     0
## 7   F du Plessis       MG Johnson     0
## 8      MM Sharma         AR Patel     0
## 9      MM Sharma       MG Johnson     0
## 10     MM Sharma          P Awana     1
## 11      MS Dhoni         AR Patel    12
## 12      MS Dhoni  Karanveer Singh     2
## 13      MS Dhoni       MG Johnson    11
## 14      MS Dhoni          P Awana    15
## 15      MS Dhoni   Sandeep Sharma     2
## 16      R Ashwin         AR Patel     1
## 17      R Ashwin  Karanveer Singh     4
## 18      R Ashwin       MG Johnson     1
## 19      R Ashwin          P Awana     1
## 20      R Ashwin   Sandeep Sharma     3
## 21     RA Jadeja         AR Patel     5
## 22     RA Jadeja       GJ Maxwell     3
## 23     RA Jadeja  Karanveer Singh    19
## 24     RA Jadeja          P Awana     0
## 25      SK Raina       MG Johnson    21
## 26      SK Raina          P Awana    40
## 27      SK Raina   Sandeep Sharma    26

9. Bowling Scorecard

This function provides the bowling performance, the number of overs bowled, maidens, runs conceded. wickets taken and economy rate for the IPL match

import pandas as pd
import yorkpy.analytics as yka
dc_dd=pd.read_csv(".\\Deccan Chargers-Delhi Daredevils-2012-05-10.csv")
a=yka.teamBowlingScorecardMatch(dc_dd,'Deccan Chargers')
print(a)
##        bowler  overs  runs  maidens  wicket  econrate
## 0  AD Russell      4    39        0       0      9.75
## 1   IK Pathan      4    46        0       1     11.50
## 2    M Morkel      4    32        0       1      8.00
## 3    S Nadeem      4    39        0       0      9.75
## 4    VR Aaron      4    30        0       2      7.50
rps_rcb=pd.read_csv(".\\Rising Pune Supergiant-Royal Challengers Bangalore-2017-04-29.csv")
b=yka.teamBowlingScorecardMatch(rps_rcb,'Royal Challengers Bangalore')
print(b)
##               bowler  overs  runs  maidens  wicket  econrate
## 0          DL Chahar      2    18        0       0      9.00
## 1       DT Christian      4    25        0       1      6.25
## 2        Imran Tahir      4    18        0       3      4.50
## 3         JD Unadkat      4    19        0       1      4.75
## 4        LH Ferguson      4     7        1       3      1.75
## 5  Washington Sundar      2     7        0       1      3.50

10. Wicket Kind

The plots below provide the kind of wicket taken by the bowler (caught, bowled, lbw etc.) for the IPL match

import pandas as pd
import yorkpy.analytics as yka
kkr_rr=pd.read_csv(".\\Kolkata Knight Riders-Rajasthan Royals-2010-04-17.csv")
yka.teamBowlingWicketKindMatch(kkr_rr,'Kolkata Knight Riders','Rajasthan Royals')

csk_kxip=pd.read_csv(".\\Chennai Super Kings-Kings XI Punjab-2014-05-30.csv")
m = yka.teamBowlingWicketKindMatch(csk_kxip,'Chennai Super Kings','Kings-Kings XI Punjab',plot=False)
print(m)
##             bowler     kind  player_out
## 0         AR Patel  run out           1
## 1         AR Patel  stumped           1
## 2  Karanveer Singh  run out           1
## 3       MG Johnson   caught           1
## 4          P Awana   caught           2
## 5   Sandeep Sharma   bowled           1

11. Wicket vs Runs conceded

The plots below provide the wickets taken and the runs conceded by the bowler in the IPL T20 match

import pandas as pd
import yorkpy.analytics as yka
dc_dd=pd.read_csv(".\\Deccan Chargers-Delhi Daredevils-2012-05-10.csv")
yka.teamBowlingWicketMatch(dc_dd,"Deccan Chargers", "Delhi Daredevils",plot=True)

print("\n\n")
rps_rcb=pd.read_csv(".\\Rising Pune Supergiant-Royal Challengers Bangalore-2017-04-29.csv")
a=yka.teamBowlingWicketMatch(rps_rcb,"Royal Challengers Bangalore", "Rising Pune Supergiant",plot=False)
print(a)
##               bowler      player_out  kind
## 0       DT Christian         V Kohli     1
## 1        Imran Tahir        AF Milne     1
## 2        Imran Tahir          P Negi     1
## 3        Imran Tahir        S Badree     1
## 4         JD Unadkat         TM Head     1
## 5        LH Ferguson  AB de Villiers     1
## 6        LH Ferguson       KM Jadhav     1
## 7        LH Ferguson       STR Binny     1
## 8  Washington Sundar     Sachin Baby     1

12. Bowler Vs Batsmen

The functions compute and display how the different bowlers of the IPL team performed against the batting opposition.

import pandas as pd
import yorkpy.analytics as yka
csk_kxip=pd.read_csv(".\\Chennai Super Kings-Kings XI Punjab-2014-05-30.csv")
yka.teamBowlersVsBatsmenMatch(csk_kxip,"Chennai Super Kings","Kings XI Punjab")

print("\n\n")
kkr_rr=pd.read_csv(".\\Kolkata Knight Riders-Rajasthan Royals-2010-04-17.csv")
m =yka.teamBowlersVsBatsmenMatch(kkr_rr,"Rajasthan Royals","Kolkata Knight Riders",plot=False)
print(m)
##        batsman      bowler  runs
## 0     AC Voges    AB Dinda     1
## 1     AC Voges  JD Unadkat     1
## 2     AC Voges   LR Shukla     1
## 3     AC Voges    M Kartik     5
## 4     AJ Finch    AB Dinda     3
## 5     AJ Finch  JD Unadkat     3
## 6     AJ Finch   LR Shukla    13
## 7     AJ Finch    M Kartik     2
## 8     AJ Finch     SE Bond     0
## 9      AS Raut    AB Dinda     1
## 10     AS Raut  JD Unadkat     1
## 11    FY Fazal    AB Dinda     1
## 12    FY Fazal   LR Shukla     3
## 13    FY Fazal    M Kartik     3
## 14    FY Fazal     SE Bond     6
## 15     NV Ojha    AB Dinda    10
## 16     NV Ojha  JD Unadkat     5
## 17     NV Ojha   LR Shukla     0
## 18     NV Ojha    M Kartik     1
## 19     NV Ojha     SE Bond     2
## 20     P Dogra  JD Unadkat     2
## 21     P Dogra   LR Shukla     5
## 22     P Dogra    M Kartik     1
## 23     P Dogra     SE Bond     0
## 24  SK Trivedi    AB Dinda     4
## 25    SK Warne    AB Dinda     2
## 26    SK Warne    M Kartik     1
## 27    SK Warne     SE Bond     0
## 28   SR Watson    AB Dinda     2
## 29   SR Watson  JD Unadkat    13
## 30   SR Watson   LR Shukla     1
## 31   SR Watson    M Kartik    18
## 32   SR Watson     SE Bond    10
## 33   YK Pathan  JD Unadkat     1
## 34   YK Pathan   LR Shukla     7

13. Match worm chart

The plots below provide the match worm graph for the IPL Twenty 20 matches

import pandas as pd
import yorkpy.analytics as yka
dc_dd=pd.read_csv(".\\Deccan Chargers-Delhi Daredevils-2012-05-10.csv")
yka.matchWormChart(dc_dd,"Deccan Chargers", "Delhi Daredevils")

gl_mi=pd.read_csv(".\\Gujarat Lions-Mumbai Indians-2017-04-29.csv")
yka.matchWormChart(gl_mi,"Mumbai Indians","Gujarat Lions")

Feel free to clone/download the code from Github yorkpy

Conclusion

This post included all functions between 2 IPL teams from the package yorkpy for IPL Twenty20 matches. As mentioned above the yaml match files have been already converted to dataframes and are available for download from Github at yorkpyData

After having used Python and R for analytics, Machine Learning and Deep Learning, I have now realized that neither language is superior or inferior. Both have, some good packages and some that are not so well suited.

To be continued. Watch this space!

Important note: Do check out my other posts using yorkpy at yorkpy-posts

You may also like
1.My book ‘Deep Learning from first principles:Second Edition’ now on Amazon
2.My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon
2. Cricpy takes a swing at the ODIs
3. Introducing cricket package yorkr: Part 1- Beaten by sheer pace!
4. Big Data-1: Move into the big league:Graduate from Python to Pyspark
5. Simulating an Edge Shape in Android

To see all posts click Index of posts

Analyzing batsmen and bowlers with cricpy template

Introduction

This post shows how you can analyze batsmen and bowlers of Test, ODI and T20s using cricpy templates, using data from ESPN Cricinfo.

The cricpy package

The data for a particular player can be obtained with the getPlayerData() function. To do you will need to go to ESPN CricInfo Player and type in the name of the player for e.g Rahul Dravid, Virat Kohli  etc. This will bring up a page which have the profile number for the player e.g. for Rahul Dravid this would be http://www.espncricinfo.com/india/content/player/28114.html. Hence, Dravid’s profile is 28114. This can be used to get the data for Rahul Dravid as shown below

1. For Test players use batting and bowling.
2. For ODI use batting and bowling
3. For T20 use T20 Batting T20 Bowling

Please mindful of the  ESPN Cricinfo Terms of Use

My posts on Cripy were
a. Introducing cricpy:A python package to analyze performances of cricketers
b. Cricpy takes a swing at the ODIs
c. Cricpy takes guard for the Twenty20s

You can clone/download this cricpy template for your own analysis of players. This can be done using RStudio or IPython notebooks from Github at cricpy-template. You can uncomment the functions and use them.

Cricpy can now analyze performances of teams in Test, ODI and T20 cricket see Cricpy adds team analytics to its arsenal!!

This post is also hosted on Rpubs at Int

The cricpy package is now available with pip install cricpy!!!

If you are passionate about cricket, and love analyzing cricket performances, then check out my racy book on cricket ‘Cricket analytics with cricketr and cricpy – Analytics harmony with R & Python’! This book discusses and shows how to use my R package ‘cricketr’ and my Python package ‘cricpy’ to analyze batsmen and bowlers in all formats of the game (Test, ODI and T20). The paperback is available on Amazon at $21.99 and  the kindle version at $9.99/Rs 449/-. A must read for any cricket lover! Check it out!!

Untitled

1 Importing cricpy – Python

# Install the package
# Do a pip install cricpy
# Import cricpy
import cricpy.analytics as ca 
## C:\Users\Ganesh\ANACON~1\lib\site-packages\statsmodels\compat\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
##   from pandas.core import datetools

2. Invoking functions with Python package cricpy

import cricpy.analytics as ca 
#ca.batsman4s("aplayer.csv","A Player")

3. Getting help from cricpy – Python

import cricpy.analytics as ca
#help(ca.getPlayerData)

The details below will introduce the different functions that are available in cricpy.

4. Get the player data for a player using the function getPlayerData()

Important Note This needs to be done only once for a player. This function stores the player’s data in the specified CSV file (for e.g. dravid.csv as above) which can then be reused for all other functions). Once we have the data for the players many analyses can be done. This post will use the stored CSV file obtained with a prior getPlayerData for all subsequent analyses

4a. For Test players

import cricpy.analytics as ca
#player1 =ca.getPlayerData(profileNo1,dir="..",file="player1.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])
#player1 =ca.getPlayerData(profileNo2,dir="..",file="player2.csv",type="batting",homeOrAway=[1,2], result=[1,2,4])

4b. For ODI players

import cricpy.analytics as ca
#player1 =ca.getPlayerDataOD(profileNo1,dir="..",file="player1.csv",type="batting")
#player1 =ca.getPlayerDataOD(profileNo2,dir="..",file="player2.csv",type="batting"")

4c For T20 players

import cricpy.analytics as ca
#player1 =ca.getPlayerDataTT(profileNo1,dir="..",file="player1.csv",type="batting")
#player1 =ca.getPlayerDataTT(profileNo2,dir="..",file="player2.csv",type="batting"")

5 A Player’s performance – Basic Analyses

The 3 plots below provide the following for Rahul Dravid

  1. Frequency percentage of runs in each run range over the whole career
  2. Mean Strike Rate for runs scored in the given range
  3. A histogram of runs frequency percentages in runs ranges
import cricpy.analytics as ca
import matplotlib.pyplot as plt
#ca.batsmanRunsFreqPerf("aplayer.csv","A Player")
#ca.batsmanMeanStrikeRate("aplayer.csv","A Player")
#ca.batsmanRunsRanges("aplayer.csv","A Player") 

6. More analyses

This gives details on the batsmen’s 4s, 6s and dismissals

import cricpy.analytics as ca
#ca.batsman4s("aplayer.csv","A Player")
#ca.batsman6s("aplayer.csv","A Player") 
#ca.batsmanDismissals("aplayer.csv","A Player")
# The below function is for ODI and T20 only
#ca.batsmanScoringRateODTT("./kohli.csv","Virat Kohli")  

7. 3D scatter plot and prediction plane

The plots below show the 3D scatter plot of Runs versus Balls Faced and Minutes at crease. A linear regression plane is then fitted between Runs and Balls Faced + Minutes at crease

import cricpy.analytics as ca
#ca.battingPerf3d("aplayer.csv","A Player")

8. Average runs at different venues

The plot below gives the average runs scored at different grounds. The plot also the number of innings at each ground as a label at x-axis.

import cricpy.analytics as ca
#ca.batsmanAvgRunsGround("aplayer.csv","A Player")

9. Average runs against different opposing teams

This plot computes the average runs scored against different countries.

import cricpy.analytics as ca
#ca.batsmanAvgRunsOpposition("aplayer.csv","A Player")

10. Highest Runs Likelihood

The plot below shows the Runs Likelihood for a batsman.

import cricpy.analytics as ca
#ca.batsmanRunsLikelihood("aplayer.csv","A Player")

11. A look at the Top 4 batsman

Choose any number of players

1.Player1 2.Player2 3.Player3 …

The following plots take a closer at their performances. The box plots show the median the 1st and 3rd quartile of the runs

12. Box Histogram Plot

This plot shows a combined boxplot of the Runs ranges and a histogram of the Runs Frequency

import cricpy.analytics as ca
#ca.batsmanPerfBoxHist("aplayer001.csv","A Player001")
#ca.batsmanPerfBoxHist("aplayer002.csv","A Player002")
#ca.batsmanPerfBoxHist("aplayer003.csv","A Player003")
#ca.batsmanPerfBoxHist("aplayer004.csv","A Player004")

13. Get Player Data special

import cricpy.analytics as ca
#player1sp = ca.getPlayerDataSp(profile1,tdir=".",tfile="player1sp.csv",ttype="batting")
#player2sp = ca.getPlayerDataSp(profile2,tdir=".",tfile="player2sp.csv",ttype="batting")
#player3sp = ca.getPlayerDataSp(profile3,tdir=".",tfile="player3sp.csv",ttype="batting")
#player4sp = ca.getPlayerDataSp(profile4,tdir=".",tfile="player4sp.csv",ttype="batting")

14. Contribution to won and lost matches

Note:This can only be used for Test matches

import cricpy.analytics as ca
#ca.batsmanContributionWonLost("player1sp.csv","A Player001")
#ca.batsmanContributionWonLost("player2sp.csv","A Player002")
#ca.batsmanContributionWonLost("player3sp.csv","A Player003")
#ca.batsmanContributionWonLost("player4sp.csv","A Player004")

15. Performance at home and overseas

Note:This can only be used for Test matches This function also requires the use of getPlayerDataSp() as shown above

import cricpy.analytics as ca
#ca.batsmanPerfHomeAway("player1sp.csv","A Player001")
#ca.batsmanPerfHomeAway("player2sp.csv","A Player002")
#ca.batsmanPerfHomeAway("player3sp.csv","A Player003")
#ca.batsmanPerfHomeAway("player4sp.csv","A Player004")

16 Moving Average of runs in career

import cricpy.analytics as ca
#ca.batsmanMovingAverage("aplayer001.csv","A Player001")
#ca.batsmanMovingAverage("aplayer002.csv","A Player002")
#ca.batsmanMovingAverage("aplayer003.csv","A Player003")
#ca.batsmanMovingAverage("aplayer004.csv","A Player004")

17 Cumulative Average runs of batsman in career

This function provides the cumulative average runs of the batsman over the career.

import cricpy.analytics as ca
#ca.batsmanCumulativeAverageRuns("aplayer001.csv","A Player001")
#ca.batsmanCumulativeAverageRuns("aplayer002.csv","A Player002")
#ca.batsmanCumulativeAverageRuns("aplayer003.csv","A Player003")
#ca.batsmanCumulativeAverageRuns("aplayer004.csv","A Player004")

18 Cumulative Average strike rate of batsman in career

.

import cricpy.analytics as ca
#ca.batsmanCumulativeStrikeRate("aplayer001.csv","A Player001")
#ca.batsmanCumulativeStrikeRate("aplayer002.csv","A Player002")
#ca.batsmanCumulativeStrikeRate("aplayer003.csv","A Player003")
#ca.batsmanCumulativeStrikeRate("aplayer004.csv","A Player004")

19 Future Runs forecast

import cricpy.analytics as ca
#ca.batsmanPerfForecast("aplayer001.csv","A Player001")

20 Relative Batsman Cumulative Average Runs

The plot below compares the Relative cumulative average runs of the batsman for each of the runs ranges of 10 and plots them.

import cricpy.analytics as ca
frames = ["aplayer1.csv","aplayer2.csv","aplayer3.csv","aplayer4.csv"]
names = ["A Player1","A Player2","A Player3","A Player4"]
#ca.relativeBatsmanCumulativeAvgRuns(frames,names)

21 Plot of 4s and 6s

import cricpy.analytics as ca
frames = ["aplayer1.csv","aplayer2.csv","aplayer3.csv","aplayer4.csv"]
names = ["A Player1","A Player2","A Player3","A Player4"]
#ca.batsman4s6s(frames,names)

22. Relative Batsman Strike Rate

The plot below gives the relative Runs Frequency Percetages for each 10 run bucket. The plot below show

import cricpy.analytics as ca
frames = ["aplayer1.csv","aplayer2.csv","aplayer3.csv","aplayer4.csv"]
names = ["A Player1","A Player2","A Player3","A Player4"]
#ca.relativeBatsmanCumulativeStrikeRate(frames,names)

23. 3D plot of Runs vs Balls Faced and Minutes at Crease

The plot is a scatter plot of Runs vs Balls faced and Minutes at Crease. A prediction plane is fitted

import cricpy.analytics as ca
#ca.battingPerf3d("aplayer001.csv","A Player001")
#ca.battingPerf3d("aplayer002.csv","A Player002")
#ca.battingPerf3d("aplayer003.csv","A Player003")
#ca.battingPerf3d("aplayer004.csv","A Player004")

24. Predicting Runs given Balls Faced and Minutes at Crease

A multi-variate regression plane is fitted between Runs and Balls faced +Minutes at crease.

import cricpy.analytics as ca
import numpy as np
import pandas as pd
BF = np.linspace( 10, 400,15)
Mins = np.linspace( 30,600,15)
newDF= pd.DataFrame({'BF':BF,'Mins':Mins})
#aplayer = ca.batsmanRunsPredict("aplayer.csv",newDF,"A Player")
#print(aplayer)

The fitted model is then used to predict the runs that the batsmen will score for a given Balls faced and Minutes at crease.

25 Analysis of Top 3 wicket takers

Take any number of bowlers from either Test, ODI or T20

  1. Bowler1
  2. Bowler2
  3. Bowler3 …

26. Get the bowler’s data (Test)

This plot below computes the percentage frequency of number of wickets taken for e.g 1 wicket x%, 2 wickets y% etc and plots them as a continuous line

import cricpy.analytics as ca
#abowler1 =ca.getPlayerData(profileNo1,dir=".",file="abowler1.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])
#abowler2 =ca.getPlayerData(profileNo2,dir=".",file="abowler2.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])
#abowler3 =ca.getPlayerData(profile3,dir=".",file="abowler3.csv",type="bowling",homeOrAway=[1,2], result=[1,2,4])

26b For ODI bowlers

import cricpy.analytics as ca
#abowler1 =ca.getPlayerDataOD(profileNo1,dir=".",file="abowler1.csv",type="bowling")
#abowler2 =ca.getPlayerDataOD(profileNo2,dir=".",file="abowler2.csv",type="bowling")
#abowler3 =ca.getPlayerDataOD(profile3,dir=".",file="abowler3.csv",type="bowling")

26c For T20 bowlers

import cricpy.analytics as ca
#abowler1 =ca.getPlayerDataTT(profileNo1,dir=".",file="abowler1.csv",type="bowling")
#abowler2 =ca.getPlayerDataTT(profileNo2,dir=".",file="abowler2.csv",type="bowling")
#abowler3 =ca.getPlayerDataTT(profile3,dir=".",file="abowler3.csv",type="bowling")

27. Wicket Frequency Plot

This plot below plots the frequency of wickets taken for each of the bowlers

import cricpy.analytics as ca
#ca.bowlerWktsFreqPercent("abowler1.csv","A Bowler1")
#ca.bowlerWktsFreqPercent("abowler2.csv","A Bowler2")
#ca.bowlerWktsFreqPercent("abowler3.csv","A Bowler3")

28. Wickets Runs plot

The plot below create a box plot showing the 1st and 3rd quartile of runs conceded versus the number of wickets taken

import cricpy.analytics as ca
#ca.bowlerWktsRunsPlot("abowler1.csv","A Bowler1")
#ca.bowlerWktsRunsPlot("abowler2.csv","A Bowler2")
#ca.bowlerWktsRunsPlot("abowler3.csv","A Bowler3")

29 Average wickets at different venues

The plot gives the average wickets taken bat different venues.

import cricpy.analytics as ca
#ca.bowlerAvgWktsGround("abowler1.csv","A Bowler1")
#ca.bowlerAvgWktsGround("abowler2.csv","A Bowler2")
#ca.bowlerAvgWktsGround("abowler3.csv","A Bowler3")

30 Average wickets against different opposition

The plot gives the average wickets taken against different countries.

import cricpy.analytics as ca
#ca.bowlerAvgWktsOpposition("abowler1.csv","A Bowler1")
#ca.bowlerAvgWktsOpposition("abowler2.csv","A Bowler2")
#ca.bowlerAvgWktsOpposition("abowler3.csv","A Bowler3")

31 Wickets taken moving average

import cricpy.analytics as ca
#ca.bowlerMovingAverage("abowler1.csv","A Bowler1")
#ca.bowlerMovingAverage("abowler2.csv","A Bowler2")
#ca.bowlerMovingAverage("abowler3.csv","A Bowler3")

32 Cumulative average wickets taken

The plots below give the cumulative average wickets taken by the bowlers.

import cricpy.analytics as ca
#ca.bowlerCumulativeAvgWickets("abowler1.csv","A Bowler1")
#ca.bowlerCumulativeAvgWickets("abowler2.csv","A Bowler2")
#ca.bowlerCumulativeAvgWickets("abowler3.csv","A Bowler3")

33 Cumulative average economy rate

The plots below give the cumulative average economy rate of the bowlers.

import cricpy.analytics as ca
#ca.bowlerCumulativeAvgEconRate("abowler1.csv","A Bowler1")
#ca.bowlerCumulativeAvgEconRate("abowler2.csv","A Bowler2")
#ca.bowlerCumulativeAvgEconRate("abowler3.csv","A Bowler3")

34 Future Wickets forecast

import cricpy.analytics as ca
#ca.bowlerPerfForecast("abowler1.csv","A bowler1")

35 Get player data special

import cricpy.analytics as ca
#abowler1sp =ca.getPlayerDataSp(profile1,tdir=".",tfile="abowler1sp.csv",ttype="bowling")
#abowler2sp =ca.getPlayerDataSp(profile2,tdir=".",tfile="abowler2sp.csv",ttype="bowling")
#abowler3sp =ca.getPlayerDataSp(profile3,tdir=".",tfile="abowler3sp.csv",ttype="bowling")

36 Contribution to matches won and lost

Note:This can be done only for Test cricketers

import cricpy.analytics as ca
#ca.bowlerContributionWonLost("abowler1sp.csv","A Bowler1")
#ca.bowlerContributionWonLost("abowler2sp.csv","A Bowler2")
#ca.bowlerContributionWonLost("abowler3sp.csv","A Bowler3")

37 Performance home and overseas

Note:This can be done only for Test cricketers

import cricpy.analytics as ca
#ca.bowlerPerfHomeAway("abowler1sp.csv","A Bowler1")
#ca.bowlerPerfHomeAway("abowler2sp.csv","A Bowler2")
#ca.bowlerPerfHomeAway("abowler3sp.csv","A Bowler3")

38 Relative cumulative average economy rate of bowlers

import cricpy.analytics as ca
frames = ["abowler1.csv","abowler2.csv","abowler3.csv"]
names = ["A Bowler1","A Bowler2","A Bowler3"]
#ca.relativeBowlerCumulativeAvgEconRate(frames,names)

39 Relative Economy Rate against wickets taken

import cricpy.analytics as ca
frames = ["abowler1.csv","abowler2.csv","abowler3.csv"]
names = ["A Bowler1","A Bowler2","A Bowler3"]
#ca.relativeBowlingER(frames,names)

40 Relative cumulative average wickets of bowlers in career

import cricpy.analytics as ca
frames = ["abowler1.csv","abowler2.csv","abowler3.csv"]
names = ["A Bowler1","A Bowler2","A Bowler3"]
#ca.relativeBowlerCumulativeAvgWickets(frames,names)

Clone/download this cricpy template for your own analysis of players. This can be done using RStudio or IPython notebooks from Github at cricpy-template

Important note: Do check out my other posts using cricpy at cricpy-posts

Key Findings

Analysis of Top 4 batsman

Analysis of Top 3 bowlers

You may also like
1. My book ‘Deep Learning from first principles:Second Edition’ now on Amazon
2. Presentation on ‘Evolution to LTE’
3. Stacks of protocol stacks – A primer
4. Taking baby steps in Lisp
5. Introducing cricket package yorkr: Part 1- Beaten by sheer pace!

To see all posts click Index of posts