My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon

Note: The 3rd edition of this book is now available. See My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

The third edition of my book ‘Practical Machine Learning with R and Python – Machine Learning in stereo’ is now available in both paperback ($12.99) and kindle ($9.99/Rs449) versions. This edition includes more content, extensive comments and formatting for better readability.

In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code.
1. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Paperback: $12.99)
2. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Kindle: $9.99/Rs449)

This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Table of Contents
Preface
Introduction
1. Essential R
2. Essential Python for Datascience
3. R vs Python
4. Regression of a continuous variable
5. Classification and Cross Validation
6. Regression techniques and regularization
7. SVMs, Decision Trees and Validation curves
8. Splines, GAMs, Random Forests and Boosting
9. PCA, K-Means and Hierarchical Clustering
References

Pick up your copy today!!
Hope you have a great time learning as I did while implementing these algorithms!

Presentation on ‘Machine Learning in plain English – Part 3’

This is the 3rd and final part of my series ‘Machine Learning in plain English’. In this presentation, I discuss the intuition behind SVMs, B-Splines, GAMs, Decision Trees, Random Forests and Gradient Boosting. I also touch upon Unsupervised Learning, specifically PCA and K-Means. As before, the presentation does not include any math or programming. The presentation can be seen below


The implementations of all the discussed algorithms are available in my book on Amazon: My book ‘Practical Machine Learning with R and Python’ on Amazon

You may also like
1. My TEDx talk on the “Internet of Things”
2. Deep Learning from first principles in Python, R and Octave – Part 2
3. De-blurring revisited with Wiener filter using OpenCV
4. Architecting a cloud based IP Multimedia System (IMS)
5. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon

To see all posts click Index of posts

My book ‘Practical Machine Learning with R and Python’ on Amazon

Note: The 3rd edition of this book is now available. See My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

My book ‘Practical Machine Learning with R and Python: Second Edition – Machine Learning in stereo’ is now available in both paperback ($10.99) and kindle ($7.99/Rs449) versions. In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code. This is almost like listening to parallel channels of music in stereo!
1. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Paperback: $12.99)
2. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Kindle: $8.99/Rs449)
This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Table of Contents
Essential R
Essential Python for Datascience
R vs Python
Regression of a continuous variable
Classification and Cross Validation
Regression techniques and regularization
SVMs, Decision Trees and Validation curves
Splines, GAMs, Random Forests and Boosting
PCA, K-Means and Hierarchical Clustering

Pick up your copy today!!
Hope you have a great time learning as I did while implementing these algorithms!

My travels through the realms of Data Science, Machine Learning, Deep Learning and AI

Then felt I like some watcher of the skies 
When a new planet swims into his ken; 
Or like stout Cortez when with eagle eyes 
He star’d at the Pacific—and all his men 
Look’d at each other with a wild surmise— 
Silent, upon a peak in Darien. 
On First Looking into Chapman’s Homer by John Keats

The above excerpt from John Keats’ poem captures the exhilaration that one experiences when discovering something for the first time. It also summarizes, to some extent, my own enjoyment while pursuing Data Science, Machine Learning and the like.

I decided to write this post, as occasionally youngsters approach me and ask me where they should start their adventure in Data Science & Machine Learning. There are other times when the ‘not-so-youngsters’ want to know what their next step should be after having done some courses. This post traces my travels through the domains of Data Science, Machine Learning, Deep Learning and, soon, AI.

By no means am I an authority in this field, which is ever-widening and almost bottomless, yet I would like to share some of my experiences in this fascinating area. I include a short review of the courses I have done below. I also include alternative routes through courses which I did not do, but which are probably equally good. Feel free to pick and choose any course or set of courses. Alternatively, you may prefer to read books or attend brick-and-mortar classes. In any case, I hope the list below will provide you with some overall direction.

All my learning in the above domains has come from MOOCs, and I restrict myself to the top 3 MOOCs, or in my opinion ‘the original MOOCs’, namely Coursera, edX and Udacity, though I may throw in some courses from other online sites if they are only available there. I would recommend these 3 MOOCs over the numerous other online courses and also over face-to-face classroom courses for the following reasons. These MOOCs

  • Are offered by world-class universities, with lectures delivered by top-class Professors who have great depth of knowledge and a wealth of experience
  • The Professors, besides delivering quality content, also point out important tips, tricks and traps
  • You can revisit lectures in online courses anytime to refresh your memory
  • Lectures are usually short, between 8-15 mins (personally, my attention span is around 15-20 mins at a time!)

Here is a fair warning, and something quite obvious: no amount of courses, lectures or books will help if you don’t put them to use through some language like Octave, R or Python.

The journey
My trip through Data Science and Machine Learning started with an off-chance remark, about 3 years ago, from an old friend of mine who spoke to me about having done a few courses at Coursera and really liking them. He further suggested that I should try it too. This was the final push which set me sailing into this vast domain.

I have included the list of the courses I have done over the past 5 years (37+ certifications completed and another 9 audited, i.e. listened to only, without doing the assignments). For each of the courses I have included a short review, whether I think the course is mandatory, the language the course is based on, and whether I have completed the course myself. I have also included alternative courses which I may not have done, but which I think are equally good. Finally, I suggest some courses which I have heard of and which are very good and worth taking.

1. Machine Learning, Stanford, Prof Andrew Ng, Coursera
(Requirement: Mandatory, Language: Octave, Status: Completed)
This course provides an excellent foundation to build your Machine Learning citadel on. The course covers the mathematical details of linear, logistic and multivariate regression. There is also good coverage of topics like Neural Networks, SVMs, Anomaly Detection, underfitting, overfitting, regularization etc. Prof Andrew Ng presents the material in a very lucid manner. It is a great course to start with. It would be a good idea to brush up on some basics of linear algebra, matrices and a little calculus, specifically computing local maxima/minima. You should be able to take this course even if you don’t know Octave, as the Prof goes over the key aspects of the language.

2. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online (Requirement: Mandatory, Language: R, Status: Completed)
The course includes linear and polynomial regression and logistic regression. It also covers cross-validation and the bootstrap, model selection and regularization (ridge and lasso). It touches on non-linear models, generalized additive models, boosting and SVMs as well. Some unsupervised learning methods are also discussed. The two Professors take turns delivering lectures, with a slight touch of humor.

3a. Data Science Specialization: Prof Roger Peng, Prof Brian Caffo & Prof Jeff Leek, Johns Hopkins University (Requirement: Option A, Language: R, Status: Completed)
This is a comprehensive 10-module specialization based on R. The Specialization gives a very broad overview of Data Science and Machine Learning. The modules cover R programming, Statistical Inference, Practical Machine Learning, how to build R products and R packages, and finally a very good Capstone project on NLP.

3b. Applied Data Science with Python Specialization: University of Michigan (Requirement: Option B, Language: Python, Status: Not done)
In this specialization I only did the Applied Machine Learning in Python course (Prof Kevyn Collins-Thompson). This is a very good course that covers a lot of Machine Learning algorithms (linear, logistic, ridge and lasso regression, kNN, SVMs etc.). Also included are confusion matrices, ROC curves etc. The course is based on Python’s scikit-learn.

3c. Machine Learning Specialization, University of Washington (Requirement: Option C, Language: Python, Status: Not completed). This appears to be a very good Specialization in Python.

4. Statistics with R Specialization, Duke University (Requirement: Useful and a must know, Language: R, Status: Not Completed)
I audited (listened only to) the following 2 modules from this Specialization:
a. Inferential Statistics
b. Linear Regression and Modeling
Both these courses are taught by Prof Mine Çetinkaya-Rundel, who delivers her lessons with extraordinary clarity. Her lectures are filled with many examples which she walks you through in great detail.

5. Bayesian Statistics: From Concept to Data Analysis, Univ of California, Santa Cruz (Requirement: Optional, Language: R, Status: Completed)
This is an interesting course and provides an alternative point of view to the frequentist approach.

6. Data Science and Engineering with Spark, University of California, Berkeley, Prof Anthony Joseph, Prof Ameet Talwalkar, Prof Jon Bates
(Requirement: Mandatory for Big Data, Status: Completed, Language: pySpark)
This specialization contains 3 modules
a. Introduction to Apache Spark
b. Distributed Machine Learning with Apache Spark
c. Big Data Analysis with Apache Spark

This is an excellent course for those who want to make an entry into Distributed Machine Learning. The exercises are fairly challenging, and your code will predominantly be made up of map/reduce and lambda operations as you process data that is distributed across Spark RDDs. I really liked the part where the Prof shows how a matrix computation which is of the order of O(nd^2 + d^3) on a single machine (and which is at the basis of Machine Learning) is reduced to O(nd^2) by taking outer products on the distributed data.
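
The course itself uses pySpark, but the underlying idea can be illustrated in a few lines of R (a rough sketch of my own, not material from the course): the Gram matrix X'X, which is the O(nd^2) part of the computation, is simply the sum of per-row outer products, and a sum is exactly the kind of operation that can be "mapped" over distributed partitions of the data and then "reduced".

set.seed(42)
n <- 1000; d <- 5
X <- matrix(rnorm(n * d), nrow = n, ncol = d)

# Gram matrix X'X computed directly on a single machine: O(n d^2)
gram_direct <- t(X) %*% X

# The same Gram matrix as a "map" (one outer product x_i x_i' per row)
# followed by a "reduce" (a sum), which is the step that can be distributed
outer_products <- lapply(seq_len(n), function(i) tcrossprod(X[i, ]))
gram_mapreduce <- Reduce(`+`, outer_products)

all.equal(gram_direct, gram_mapreduce)  # TRUE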

7. Deep Learning Specialization, Prof Andrew Ng, Younes Bensouda Mourri, Kian Katanforoosh (Requirement: Mandatory, Language: Python, TensorFlow, Status: Completed)

This course has 5 modules which start from the fundamentals of Neural Networks, their derivation and vectorized Python implementation. The specialization also covers regularization, optimization techniques, mini-batch normalization, Convolutional Neural Networks, Recurrent Neural Networks and LSTMs applied to a wide variety of real-world problems.

The modules are
a. Neural Networks and Deep Learning
In this course Prof Andrew Ng explains differential calculus, linear algebra and vectorized Python implementations of Deep Learning algorithms. The derivation of back-propagation is done, and then the Prof shows how to compute a multi-layered DL network.
b. Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
Deep Neural Networks can be very flexible and come with a lot of knobs (hyper-parameters) to tune. In this module, Prof Andrew Ng shows a systematic way to tune hyperparameters and by how much one should tune them. The course also covers regularization (L1, L2, dropout), gradient descent optimization and batch normalization methods. The visualizations used to explain the momentum method, RMSprop, Adam, LR decay and batch normalization are really powerful and serve to clarify the concepts. As an added bonus, the module also includes a great introduction to TensorFlow.
c. Structuring Machine Learning Projects
A very good module with useful tips, tricks and traps that need to be considered while working on Machine Learning and Deep Learning projects
d. Convolutional Neural Networks
This domain has a lot of really cool ideas, where images, represented as 3D volumes, are compressed and stretched longitudinally before a multi-layered deep learning neural network is applied to this thin slice for performing classification, detection etc. The Prof provides a glimpse into this fascinating world of image classification, detection and neural art transfer with frameworks like Keras and TensorFlow.
e. Sequence Models
This module covers, in good detail, concepts like RNNs, GRUs, LSTMs, word embeddings, beam search and attention models.

8. Neural Networks for Machine Learning, Prof Geoffrey Hinton, University of Toronto
(Requirement: Mandatory, Language: Octave, Status: Completed)
This is a broad course which starts from the basics of Perceptrons and goes all the way to Boltzmann Machines, RNNs, CNNs, LSTMs etc. The course also covers regularization, learning rate decay, the momentum method etc.

9. Probabilistic Graphical Models, Stanford, Prof Daphne Koller (Language: Octave, Status: Partially completed)
This has 3 courses
a. Probabilistic Graphical Models 1: Representation – Done
b. Probabilistic Graphical Models 2: Inference – To do
c. Probabilistic Graphical Models 3: Learning – To do
This course discusses how a system which can be represented as a complex interaction of probability distributions will behave. This is probably the toughest course I did. I did manage to get through the 1st module, and while I felt that I grasped a few things, I did not wholly understand the import of it all. However, I feel this is an important domain and I will definitely revisit it in future.

10. Reinforcement Learning Specialization: University of Alberta, Prof Adam White and Prof Martha White
(Requirement: Very important, Language: Python, Status: Partially Completed)
This is a set of 4 courses. I did the first 2 of the 4. Reinforcement Learning appears deceptively simple, but it is anything but simple. Definitely a very critical area to learn.

a. Fundamentals of Reinforcement Learning: This course discusses Markov models, value functions, Bellman equations and dynamic programming.
b. Sample-based Learning Methods: This course touches on Monte Carlo methods, Temporal Difference methods, Q-Learning etc.

Reinforcement Learning is a must-have in your AI arsenal.

11. TensorFlow in Practice Specialization – Prof Laurence Moroney – DeepLearning.AI
(Requirement: Important, Language: Python, Status: Completed)
This is a good course, but definitely do the Deep Learning Specialization by Prof Andrew Ng as well.
There are 4 courses in this Specialization. I completed all 4; they are fairly straightforward.
a. Introduction to TensorFlow – This course introduces you to TensorFlow and image recognition with a brute-force method
b. Convolutional Neural Networks in TensorFlow – This course touches on how to build a CNN, image augmentation, transfer learning and multi-class classification
c. Natural Language Processing in TensorFlow – Word embeddings, sentiment analysis, LSTMs and RNNs are discussed.
d. Sequences, time series and prediction – This course discusses using RNNs for time series and auto-correlation

12. Natural Language Processing Specialization – Prof Younes Bensouda, Lukasz Kaiser, DeepLearning.AI
(Requirement: Very Important, Language: Python, Status: Partially Completed)
This is the latest specialization from DeepLearning.AI. I have completed the first 2 courses.
a. Natural Language Processing with Classification and Vector Spaces – The first course deals with sentiment analysis with Naive Bayes, vector space models, capturing dependencies using PCA etc.
b. Natural Language Processing with Probabilistic Models – In this course, techniques for auto-correction, Markov models and the Viterbi algorithm for Parts of Speech tagging, auto-completion and word embeddings are discussed.

13. Mining Massive Data Sets, Prof Jure Leskovec, Prof Anand Rajaraman and Prof Jeff Ullman, Stanford Online (Status: Partially done)
I quickly audited this course a year back, when it used to be on Coursera. It now seems to have moved to Stanford Online. It is a very good course that discusses key concepts of mining Big Data of the order of a few petabytes.

14. Introduction to Artificial Intelligence, Prof Sebastian Thrun & Prof Peter Norvig, Udacity
This is a really good course. I have started on this course a couple of times and somehow gave up. I will revisit it and complete it in future. It is quite extensive in its coverage and touches on BFS, DFS, A-Star, PGMs, Machine Learning etc.

15. Deep Learning (with TensorFlow), Vincent Vanhoucke, Principal Scientist at Google Brain
I got started on this one and abandoned it some time back. It is on my to-do list though.

My learning journey is based on Lao Tzu’s dictum of ‘A good traveler has no fixed plans and is not intent on arriving’. You could have a goal and try to plan your courses accordingly.
And so my journey continues…

I hope you find this list useful.
Have a great journey ahead!!!

The making of cricket package yorkr – Part 3

Introduction

This is the 3rd part of the series on my cricket package yorkr in R. In my 2 earlier posts

  1. The making of cricket package yorkr – Part 1. This post analyzed the performance of a team in an ODI match. The batting and bowling performances of the team were analyzed. The post also performed analyses of a country in all matches against another country, e.g. India in all matches against Australia. The best performers with the bat and ball were determined, along with the best batting partnerships and the performances at different venues. The detailed performances of the bowlers of India and Australia in these confrontations were also analyzed.
  2. The making of cricket package yorkr – Part 2. This post covered all ODI matches between a country and all others. For obvious reasons I chose India and selected all ODI matches played by India against other countries. This included the batting and bowling performances of the country against all oppositions.

As mentioned in my earlier posts, the data is taken from Cricsheet.

If you are passionate about cricket and love analyzing cricket performances, then check out my 2 racy books on cricket! In my books, I perform detailed yet compact analysis of the performances of batsmen and bowlers, besides evaluating team and match performances in Tests, ODIs, T20s and the IPL. You can buy my books on cricket from Amazon at $12.99 for the paperback and $4.99/$6.99 respectively for the kindle versions. The books can be accessed at Cricket analytics with cricketr and Beaten by sheer pace – Cricket analytics with yorkr. A must read for any cricket lover! Check them out!!


Important note: Do check out my other posts using yorkr at yorkr-posts

Important note: Do check out all the posts on the python avatar of yorkr, namely ‘yorkpy’, in my post ‘Pitching yorkpy … short of good length to IPL – Part 1’

In this post I look at the individual performances of batsmen and bowlers in ODIs. For this post I have chosen Virat Kohli and Mahendra Singh Dhoni from India. Kohli has been consistent and in great form right through, and Dhoni follows Kohli very closely in ODIs. Dhoni, besides his shrewd captaincy, is one of the best ODI batsmen and a great finisher. I have also included AB de Villiers from South Africa, who seems to invent new strokes and shots every time, much like Glenn Maxwell.

For the bowling analyses I have selected RA Jadeja and Harbhajan Singh (among the top Indian ODI bowlers) and Mitchell Johnson, who is among the best in the world.

This post is also available at RPubs at yorkr-3. You can also download this post as a pdf from yorkr-3.pdf

Checkout my interactive Shiny apps GooglyPlus (plots & tables) and Googly (only plots) which can be used to analyze IPL players, teams and matches.

My earlier package ‘cricketr’ (see Introducing cricketr: An R package for analyzing performances of cricketers) was based on data from ESPN Cricinfo Statsguru. Do take a look at my book with all my articles based on the package cricketr – Cricket analytics with cricketr!!! The book is also available in paperback and kindle versions at Amazon, which, by the way, has better formatting!

I have added some quick observations on the plots below. However, there is a lot more that can be discerned from the plots than I can possibly explain. The charts display a wealth of insights, so do take a close look at them.

library(dplyr)
library(ggplot2)
library(yorkr)
library(reshape2)
library(gridExtra)
library(rpart.plot)

1. Batting Details

The following functions get the overall batting details for a country against all opposition.

a <- getTeamBattingDetails("India",save=TRUE)
b <- getTeamBattingDetails("South Africa",save=TRUE)

2. Get Batsman details

Now I get the details of the batsmen Virat Kohli and Mahendra Singh Dhoni from the saved India file and AB De Villiers from the saved South Africa file

kohli <- getBatsmanDetails(team="India",name="Kohli")
dhoni <- getBatsmanDetails(team="India",name="Dhoni")
devilliers <-  getBatsmanDetails(team="South Africa",name="Villiers")

3. Display the dataframe

The data frames obtained from the calls above provide detailed information for each batsman in every ODI match. These data frames have all the fields that can be obtained from ESPN Cricinfo.
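
The screenshot of the data frame is not reproduced here, but the returned object can be inspected along the following lines. This is only a sketch: the columns batsman, ballsPlayed, fours, sixes and runs are the ones used by the select() calls later in this post, and any other fields will depend on the yorkr version.

# Inspect the batsman data frame returned by getBatsmanDetails()
dim(kohli)      # innings x fields
names(kohli)    # all available fields
# A peek at a few of the columns used later in this post
knitr::kable(head(dplyr::select(kohli, batsman, ballsPlayed, fours, sixes, runs)))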

Performance analyses of batsmen

 

4. Runs vs deliveries plot

It can be seen from the plots below that Kohli is very consistent in the runs scored; the runs cluster near the regression curve. There is more variance in Dhoni’s and De Villiers’ performances. The band on either side of the regression curve represents the 95% confidence interval (a 95% confidence level means that 95% of such intervals would include the population parameter).

p1 <-batsmanRunsVsDeliveries(kohli,"Kohli")
p2 <- batsmanRunsVsDeliveries(dhoni, "Dhoni")
p3 <- batsmanRunsVsDeliveries(devilliers,"De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

runsDel-1
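
The plot above comes straight from yorkr’s batsmanRunsVsDeliveries(). Purely as an illustration of the shaded band mentioned above, a similar scatter with a smoothed regression curve and its 95% confidence ribbon can be built with ggplot2 along these lines (a sketch that assumes the ballsPlayed and runs columns used elsewhere in this post; the actual yorkr internals may differ).

# Runs vs deliveries with a smoothed curve and its 95% confidence band
ggplot(kohli, aes(x = ballsPlayed, y = runs)) +
  geom_point(alpha = 0.5) +
  geom_smooth(method = "loess", se = TRUE, level = 0.95) +
  labs(title = "Kohli - Runs vs Deliveries faced",
       x = "Deliveries faced", y = "Runs scored")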

5. Total runs vs 4s vs 6s plot

The plots below show the runs (total runs, runs from 4s and runs from 6s) vs the deliveries faced. Kohli scores more runs and more fours, which can be seen from the slope of the blue and red regression lines (reaching 150+ and 50+ for total runs and runs from fours respectively). De Villiers has more runs from sixes, as can be seen in the 3rd sub-plot (green line).

kohli46 <- select(kohli,batsman,ballsPlayed,fours,sixes,runs)
p1 <- batsmanFoursSixes(kohli46,"Kohli")
dhoni46 <- select(dhoni,batsman,ballsPlayed,fours,sixes,runs)
p2 <- batsmanFoursSixes(dhoni46,"Dhoni")
devilliers46 <- select(devilliers,batsman,ballsPlayed,fours,sixes,runs)
p3 <- batsmanFoursSixes(devilliers46, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

4s6s-1

6. Batsmen dismissals

Interestingly, it can be seen that Dhoni has remained unbeaten more often (47 times) than Kohli or De Villiers. Dhoni, despite being a great runner between wickets, has been run out more often.

p1 <-batsmanDismissals(kohli,"Kohli")
p2 <- batsmanDismissals(dhoni, "Dhoni")
p3 <- batsmanDismissals(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

dismissal-1

7. Batsmen Strike Rate

From the plot below, Kohli has the best strike rate till about 100 runs; the slope seems to be steeper. De Villiers seems to do better after 100 runs.

p1 <-batsmanMeanStrikeRate(kohli,"Kohli")
p2 <- batsmanMeanStrikeRate(dhoni, "Dhoni")
p3 <- batsmanMeanStrikeRate(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

meanSR-1

8. Batsmen moving average

Kohli’s and De Villiers’ form can be seen to be improving over the years. Dhoni seems to have hit a slump in recent times. But we have to keep in mind that he has the second highest ODI runs for India and is just behind Kohli.

p1 <-batsmanMovingAverage(kohli,"Kohli")
p2 <- batsmanMovingAverage(dhoni, "Dhoni")
p3 <- batsmanMovingAverage(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

bmanMA-1

9. Batsmen against opposition

Kohli averages 50 runs against 6 countries, compared to Dhoni’s 4. Kohli performs well against Australia, New Zealand, West Indies, Pakistan and Bangladesh. Kohli’s performance against England has been mediocre. De Villiers averages around 50 against 5 countries.

batsmanRunsAgainstOpposition(kohli,"Kohli")

bmanOppn-1

batsmanRunsAgainstOpposition(dhoni, "Dhoni")

bmanOppn-2

batsmanRunsAgainstOpposition(devilliers, "De Villiers")

bmanOppn-3

10. Batsmen runs at different venues

Kohli’s favorite hunting grounds in ODIs are Adelaide, Sydney, Western Australia and Wankhede. Dhoni’s best performances are at Lord’s, Sydney and Chepauk.

batsmanRunsVenue(kohli,"Kohli")

bmanOppn1-1

batsmanRunsVenue(dhoni, "Dhoni")

bmanOppn1-2

batsmanRunsVenue(devilliers, "De Villiers")

bmanOppn1-3

11. Batsmen runs predict

The plots below predict the number of deliveries needed by each batsman to score the runs shown. For this I have used classification trees based on deliveries and runs, using the package rpart. From the plot for Kohli it can be seen that in about 58 deliveries he scores around 52 runs. On the other hand, De Villiers needs just over 40 deliveries to score 52 runs. A short sketch of this idea with rpart follows the plot output below.

par(mfrow=c(1,3))
par(mar=c(4,4,2,2))
batsmanRunsPredict(kohli,"Kohli")
batsmanRunsPredict(dhoni, "Dhoni")
batsmanRunsPredict(devilliers, "De Villiers")

runsPred-1

dev.off()
## null device 
##           1
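
batsmanRunsPredict() does the tree fitting and plotting internally. As a rough sketch of the idea described above (and not the actual yorkr code), a tree of runs on deliveries faced can be fitted and drawn with rpart and rpart.plot, the latter of which is already loaded at the top of this post.

# Fit a tree of runs on deliveries faced and plot it; the leaves give the
# runs predicted for the range of deliveries at each split
library(rpart)
fit <- rpart(runs ~ ballsPlayed, data = kohli, method = "anova")
rpart.plot::rpart.plot(fit, main = "Kohli - predicted runs vs deliveries faced")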

12. Get team bowling details

The functions below get the bowling details from all the ODI matches between India or Australia and all other countries.

c <- getTeamBowlingDetails("India",save=TRUE)
d <- getTeamBowlingDetails("Australia",save=TRUE)

13. Get wicket details

The functions below get the data frame for each bowler.

jadeja <- getBowlerWicketDetails(team="India",name="Jadeja")
harbhajan <- getBowlerWicketDetails(team="India",name="Harbhajan")
ashwin <- getBowlerWicketDetails(team="India",name="Ashwin")
johnson <-  getBowlerWicketDetails(team="Australia",name="Johnson")

14. Display data frame

The details of the data frame are shown below

knitr::kable(head(jadeja))
|bowler    | overs| maidens| runs| wickets| economyRate|date       |opposition  |venue                                        |
|:---------|-----:|-------:|----:|-------:|-----------:|:----------|:-----------|:--------------------------------------------|
|RA Jadeja |     6|       0|   40|       0|        6.67|2009-02-08 |Sri Lanka   |R Premadasa Stadium                          |
|RA Jadeja |     7|       1|   34|       0|        4.86|2009-06-26 |West Indies |Sabina Park, Kingston                        |
|RA Jadeja |     2|       0|   12|       0|        6.00|2009-06-28 |West Indies |Sabina Park, Kingston                        |
|RA Jadeja |     9|       0|   39|       1|        4.33|2009-10-25 |Australia   |Reliance Stadium                             |
|RA Jadeja |     7|       1|   35|       4|        5.00|2009-10-28 |Australia   |Vidarbha Cricket Association Stadium, Jamtha |
|RA Jadeja |     7|       1|   35|       4|        5.00|2009-10-28 |Australia   |Vidarbha Cricket Association Stadium, Jamtha |

15. Bowler Economy rate

Harbhajan and Ashwin have a better economy rate than RA Jadeja

p1 <- bowlerEconomyRate(jadeja,"RA Jadeja")
p2<-bowlerEconomyRate(harbhajan, "Harbhajan")
p3<-bowlerEconomyRate(ashwin, "Ashwin")
p4<-bowlerEconomyRate(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

ER-1

16. Mean runs conceded by bowler

p1<-bowlerMeanRuns(jadeja,"RA Jadeja")
p2<-bowlerMeanRuns(harbhajan, "Harbhajan")
p3<-bowlerMeanRuns(ashwin, "Ashwin")
p4<-bowlerMeanRuns(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

meanRuns-1

17. Moving average of bowler

From the plots below, MG Johnson, Harbhajan and Ashwin have been performing very consistently. RA Jadeja’s bowling seems to be taking a nosedive, though he is at the top of all Indian ODI bowlers.

p1<-bowlerMovingAverage(jadeja,"RA Jadeja")
p2<-bowlerMovingAverage(harbhajan, "Harbhajan")
p3<-bowlerMovingAverage(ashwin, "Ashwin")
p4<-bowlerMovingAverage(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

bwlrMA-1

18. Wicket average

Jadeja has a better wicket average than Harbhajan and Ashwin. Jadeja and Ashwin average around 2 wickets, while Harbhajan averages around 1.5 wickets (tending towards 2).

p1<-bowlerWicketPlot(jadeja,"RA Jadeja")
p2<-bowlerWicketPlot(harbhajan, "Harbhajan")
p3<-bowlerWicketPlot(ashwin, "Ashwin")
p4<-bowlerWicketPlot(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

bwlrWkt-1

19. Wickets against opposition

Jadeja’s best performances have been against England, Pakistan, New Zealand and Zimbabwe. For Harbhajan it has been New Zealand, Sri Lanka and Zimbabwe.

bowlerWicketsAgainstOpposition(jadeja,"RA Jadeja")

bwlrOppn-1

bowlerWicketsAgainstOpposition(harbhajan, "Harbhajan")

bwlrOppn-2

bowlerWicketsAgainstOpposition(ashwin, "Ashwin")

bwlrOppn-3

bowlerWicketsAgainstOpposition(johnson, "MG Johnson")

bwlrOppn-4

20. Wickets at venues

The top 20 venues for each bowler are shown in the plots.

bowlerWicketsVenue(jadeja,"RA Jadeja")

bwlrVenue-1

bowlerWicketsVenue(harbhajan, "Harbhajan")

bwlrVenue-2

bowlerWicketsVenue(ashwin, "Ashwin")

bwlrVenue-3

bowlerWicketsVenue(johnson, "MG Johnson")

bwlrVenue-4

21. Create a data frame with wickets and deliveries

jadeja1 <- getDeliveryWickets(team="India",name="Jadeja",save=FALSE)
harbhajan1 <- getDeliveryWickets(team="India",name="Harbhajan",save=FALSE)
ashwin1 <- getDeliveryWickets(team="India",name="Ashwin",save=FALSE)
johnson1 <- getDeliveryWickets(team="Australia",name="MG Johnson",save=FALSE)

22. Deliveries to wickets plots

The following plots try to predict the average number of deliveries required for the wickets taken. As with the batsman runs prediction, I have used classification trees on the deliveries at which wickets were taken, with the package rpart used for the classification. The intermediate nodes are the number of deliveries and the leaf nodes are the wickets taken. Though the wickets are in decimals, we can interpret the tree as follows: RA Jadeja needs around 22 deliveries to take 1.6 wickets (~2 wickets). Interestingly, Harbhajan needs only around 11 deliveries to make a breakthrough (see the Conclusion below). A small sketch of reading predictions off such a tree follows the plots below.

par(mfrow=c(1,2))
par(mar=c(4,4,2,2))
bowlerWktsPredict(jadeja1,"RA Jadeja")
bowlerWktsPredict(harbhajan1,"Harbhajan Singh")

wktPrd1-1

dev.off()
## null device 
##           1

Similarly MG Johnson can provide a breakthrough with just around 14 deliveries

par(mfrow=c(1,2))
par(mar=c(4,4,2,2))
bowlerWktsPredict(ashwin1,"Ravichander Ashwin")
bowlerWktsPredict(johnson1,"MG Johnson")

wktPred2-1

dev.off()
## null device 
##           1
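
The trees above are drawn by bowlerWktsPredict(). To read the expected number of wickets off such a tree programmatically, one could fit it directly and call predict(). This is only a sketch: the column names delivery and wickets are my assumptions about the data frame returned by getDeliveryWickets() and may differ in the actual package.

# The column names delivery and wickets are assumed here for illustration only
library(rpart)
fit_jadeja <- rpart(wickets ~ delivery, data = jadeja1, method = "anova")
# Expected wickets after 11, 22 and 33 deliveries
predict(fit_jadeja, newdata = data.frame(delivery = c(11, 22, 33)))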

Conclusion

ODI batsman

  1. The top 2 Indian ODI batsmen (Kohli and Dhoni) and De Villiers of South Africa were considered.
  2. Kohli has a better strike rate till about 100 runs(steeper slope) and De Villiers beyond 100.
  3. Dhoni has remained unbeaten more often than the other 2. His average might have been even higher had he come in to bat earlier.
  4. Kohli and De Villiers have performed consistently. Dhoni needs to get back his touch

ODI bowlers

  1. RA Jadeja has a better wicket taking rate than Harbhajan and Ashwin.
  2. Ashwin and Harbhajan have a better economy rate than Jadeja
  3. Harbhajan, Ashwin and MG Johnson have performed consistently, while RA Jadeja’s performance has been on the decline.
  4. Harbhajan and MG Johnson need around 11 balls to make a breakthrough.

This was probably the last set of functions for my cricket package yorkr. Over the next several weeks I will be cleaning up, documenting and refining the functions and removing any glitches. I hope to have the package released in the next 6-8 weeks.

Also see

  1. Cricket analytics with cricketr
  2. Sixer: An R package cricketr’s new Shiny avatar

You may also like

  1. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress
  2. The common alphabet of programming languages
  3. A method to crowd source pothole marking on (Indian) roads
  4. The Anomaly
  5. Simulating the domino effect in Android using Box2D and AndEngine
  6. Presentation on Wireless Technologies – Part 1
  7. Natural selection of database technology through the years