# My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon

Note: The 3rd edition of this book is now available: ‘Practical Machine Learning in R and Python: Third edition’ on Amazon

The third edition of my book ‘Practical Machine Learning with R and Python – Machine Learning in stereo’ is now available in both paperback ($12.99) and Kindle ($9.99/Rs449) versions. This third edition includes more content, extensive comments and formatting for better readability.

In this book I implement some of the most common, but important Machine Learning algorithms in R and equivalent Python code.
1. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Paperback – $12.99)
2. Practical Machine Learning with R and Python: Third Edition – Machine Learning in Stereo (Kindle – $9.99/Rs449)

This book is ideal both for beginners and experts in R and/or Python. Those starting their journey into data science and ML will find the first 3 chapters useful, as they touch upon the most important programming constructs in R and Python and also deal with equivalent statements in R and Python. Those who are expert in either of the languages, R or Python, will find the equivalent code ideal for brushing up on the other language. And finally, those who are proficient in both languages can use the R and Python implementations to internalize the ML algorithms better.

Here is a look at the topics covered

Preface – 4
Introduction – 6
1. Essential R – 8
2. Essential Python for Datascience – 57
3. R vs Python – 81
4. Regression of a continuous variable – 101
5. Classification and Cross Validation – 121
6. Regression techniques and regularization – 146
7. SVMs, Decision Trees and Validation curves – 191
8. Splines, GAMs, Random Forests and Boosting – 222
9. PCA, K-Means and Hierarchical Clustering – 258
References – 269

Hope you have a great time learning as I did while implementing these algorithms!

# Practical Machine Learning with R and Python – Part 5

This is the 5th and probably penultimate part of my series on ‘Practical Machine Learning with R and Python’. The earlier parts of this series included

1. Practical Machine Learning with R and Python – Part 1: In this initial post, I touch upon univariate, multivariate, polynomial regression and KNN regression in R and Python
2. Practical Machine Learning with R and Python – Part 2: In this post, I discuss Logistic Regression, KNN classification and cross validation error for both LOOCV and K-Fold in both R and Python
3. Practical Machine Learning with R and Python – Part 3: This post covered ‘feature selection’ in Machine Learning. Specifically I touch upon best fit, forward fit, backward fit, ridge (L2 regularization) & lasso (L1 regularization). The post includes equivalent code in R and Python.
4. Practical Machine Learning with R and Python – Part 4: In this part I discussed SVMs, Decision Trees, validation, precision-recall, and ROC curves

This post, ‘Practical Machine Learning with R and Python – Part 5’, discusses regression with B-splines, natural splines, smoothing splines, generalized additive models (GAMs), bagging, random forests and boosting.

As with my previous posts in this series, this post is largely based on the following 2 MOOC courses

1. Statistical Learning, Prof Trevor Hastie & Prof Robert Tibshirani, Stanford Online
2. Applied Machine Learning in Python, Prof Kevyn Collins-Thompson, University of Michigan, Coursera

You can download this R Markdown file and associated data files from Github at MachineLearning-RandPython-Part5

1. Machine Learning in plain English-Part 1
2. Machine Learning in plain English-Part 2
3. Machine Learning in plain English-Part 3

Check out my compact and minimal book “Practical Machine Learning with R and Python: Third edition – Machine Learning in stereo”, available on Amazon in paperback ($12.99) and Kindle ($8.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems, and how to select the best model. Pick your copy today!!

For this part I have used data sets from the UCI Machine Learning repository (Communities and Crime, and Auto MPG).

## 1. Splines

When performing regression (continuous or logistic) between a target variable and a feature (or a set of features), a single polynomial over the entire range of the data set usually does not give a good fit. Rather, we need to fit separate regression curves to different sections of the data set.

There are several techniques which do this, for example piecewise-constant functions, piecewise-linear functions, piecewise-quadratic/cubic/4th order polynomial functions etc. One such set of functions are the cubic splines, which fit cubic polynomials to successive sections of the dataset. The points where the cubic splines join are called ‘knots’.

Since each section has a different cubic polynomial, there could be discontinuities (or breaks) at these knots. To prevent these discontinuities, ‘natural splines’ and ‘smoothing splines’ ensure that the separate cubic functions have 2nd order continuity at these knots with the adjacent splines. 2nd order continuity implies that the value, the 1st order derivative and the 2nd order derivative at these knots are equal.
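In symbols, if $f_{k}$ and $f_{k+1}$ are the cubic polynomials on either side of a knot $\alpha_{k}$, 2nd order continuity at the knot requires

$f_{k}(\alpha_{k}) = f_{k+1}(\alpha_{k})$, $f_{k}^{'}(\alpha_{k}) = f_{k+1}^{'}(\alpha_{k})$ and $f_{k}^{''}(\alpha_{k}) = f_{k+1}^{''}(\alpha_{k})$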

A cubic spline with knots $\alpha_{k}$, k=1,2,3,..,K is a piecewise cubic polynomial with continuous derivatives up to order 2 at each knot. We can write $y_{i} = \beta_{0} +\beta_{1}b_{1}(x_{i}) +\beta_{2}b_{2}(x_{i}) + .. + \beta_{K+3}b_{K+3}(x_{i}) + \epsilon_{i}$.
For each $(x_{i},y_{i})$, the $b_{j}$ are called ‘basis’ functions, where $b_{1}(x_{i})=x_{i}$, $b_{2}(x_{i})=x_{i}^2$, $b_{3}(x_{i})=x_{i}^3$, and $b_{k+3}(x_{i})=(x_{i} -\alpha_{k})_{+}^3$ for k=1,2,3,..,K. The 1st and 2nd derivatives of cubic splines are continuous at the knots. Hence splines provide a smooth continuous fit to the data by fitting different cubic polynomials to different sections of the data.
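To make the basis functions concrete, here is a minimal R sketch (my illustration, not from the original text) that builds the truncated-power basis by hand for a single knot at horsepower=100 and fits it with lm(). It assumes the ‘auto’ data frame created in section 1.1a below.

# Build the cubic spline basis b1..b4 by hand for one knot at 100
# Assumes the 'auto' data frame (horsepower, mpg) from section 1.1a
h <- function(x, knot) pmax(0, (x - knot)^3)  # truncated cubic term, zero below the knot
fitManual <- lm(mpg ~ horsepower + I(horsepower^2) + I(horsepower^3) + h(horsepower, 100),
data=auto)
summary(fitManual) # the coefficients correspond to beta1..beta4 in the equation above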

## 1.1a Fit a 4th degree polynomial – R code

In the code below a non-linear function (a 4th order polynomial) is used to fit the data. Usually when we fit a single polynomial to the entire data set, the tails of the fit tend to vary a lot, particularly if there are few points at the ends. Splines help in reducing this variation at the extremities.

library(dplyr)
library(ggplot2)
source('RFunctions-1.R')
df=read.csv("auto_mpg.csv",stringsAsFactors = FALSE) # Data from UCI
df1 <- as.data.frame(sapply(df,as.numeric))
#Select specific columns
df2 <- df1 %>% dplyr::select(cylinder,displacement, horsepower,weight, acceleration, year,mpg)
auto <- df2[complete.cases(df2),]
# Fit a 4th degree polynomial
fit=lm(mpg~poly(horsepower,4),data=auto)
#Display a summary of fit
summary(fit)
##
## Call:
## lm(formula = mpg ~ poly(horsepower, 4), data = auto)
##
## Residuals:
##      Min       1Q   Median       3Q      Max
## -14.8820  -2.5802  -0.1682   2.2100  16.1434
##
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)
## (Intercept)            23.4459     0.2209 106.161   <2e-16 ***
## poly(horsepower, 4)1 -120.1377     4.3727 -27.475   <2e-16 ***
## poly(horsepower, 4)2   44.0895     4.3727  10.083   <2e-16 ***
## poly(horsepower, 4)3   -3.9488     4.3727  -0.903    0.367
## poly(horsepower, 4)4   -5.1878     4.3727  -1.186    0.236
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 4.373 on 387 degrees of freedom
## Multiple R-squared:  0.6893, Adjusted R-squared:  0.6861
## F-statistic: 214.7 on 4 and 387 DF,  p-value: < 2.2e-16
#Get the range of horsepower
hp <- range(auto$horsepower)
#Create a sequence to be used for plotting
hpGrid <- seq(hp[1],hp[2],by=10)
#Predict for these values of horsepower. Set Standard error as TRUE
pred=predict(fit,newdata=list(horsepower=hpGrid),se=TRUE)
#Compute bands on either side that is 2xSE
seBands=cbind(pred$fit+2*pred$se.fit,pred$fit-2*pred$se.fit)
#Plot the fit with Standard Error bands
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Polynomial of degree 4")
lines(hpGrid,pred$fit,lwd=2,col="blue")
matlines(hpGrid,seBands,lwd=2,col="blue",lty=3)

## 1.1b Fit a 4th degree polynomial – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
# Read the auto data (the same CSV used in the R section; the file name is assumed here)
autoDF=pd.read_csv("auto_mpg.csv")
# Select columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
# Convert all columns to numeric
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')

#Drop NAs
autoDF3=autoDF2.dropna()
autoDF3.shape
X=autoDF3[['horsepower']]
y=autoDF3['mpg']
#Create a polynomial of degree 4
poly = PolynomialFeatures(degree=4)
X_poly = poly.fit_transform(X)

# Fit a polynomial regression line
linreg = LinearRegression().fit(X_poly, y)
# Create a range of values
hpGrid = np.arange(np.min(X),np.max(X),10)
hp=hpGrid.reshape(-1,1)
# Transform to 4th degree
poly = PolynomialFeatures(degree=4)
hp_poly = poly.fit_transform(hp)

#Create a scatter plot
plt.scatter(X,y)
# Fit the prediction
ypred=linreg.predict(hp_poly)
plt.title("Poylnomial of degree 4")
fig2=plt.xlabel("Horsepower")
fig2=plt.ylabel("MPG")
# Draw the regression curve
plt.plot(hp,ypred,c="red")
plt.savefig('fig1.png', bbox_inches='tight')

## 1.1c Fit a B-Spline – R Code

In the code below a B-spline is fit to the data. The B-spline requires the manual selection of knots.

#Splines
library(splines)
# Fit a B-spline to the data. Select knots at 60,75,100,150
fit=lm(mpg~bs(horsepower,df=6,knots=c(60,75,100,150)),data=auto)
# Use the fitted regresion to predict
pred=predict(fit,newdata=list(horsepower=hpGrid),se=T)
# Create a scatter plot
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="B-Spline with 4 knots")
#Draw lines with 2 Standard Errors on either side
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")
abline(v=c(60,75,100,150),lty=2,col="darkgreen")

## 1.1d Fit a Natural Spline – R Code

Here a ‘Natural Spline’ is used for the fit. The natural spline is constrained to be linear beyond the boundary knots, so the ends of the fit are much more constrained than a regular spline or a global polynomial, where the ends can wag a lot more. Natural splines do not require the explicit selection of knots.

# There is no need to select the knots here. There is a smoothing parameter which
# can be specified by the degrees of freedom 'df' parameter. Fit the natural spline
fit2=lm(mpg~ns(horsepower,df=4),data=auto)
pred=predict(fit2,newdata=list(horsepower=hpGrid),se=T)
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Natural Splines")
lines(hpGrid,pred$fit,lwd=2)
lines(hpGrid,pred$fit+2*pred$se,lty="dashed")
lines(hpGrid,pred$fit-2*pred$se,lty="dashed")

## 1.1e Fit a Smoothing Spline – R code

Here a smoothing spline is used. Smoothing splines also do not require the explicit setting of knots. We can change the ‘degrees of freedom (df)’ parameter to get the best fit.

# Smoothing spline has a smoothing parameter, the degrees of freedom
# This is too wiggly
plot(auto$horsepower,auto$mpg,xlim=hp,cex=.5,col="black",xlab="Horsepower",
ylab="MPG", main="Smoothing Splines")

# Here df is set to 16. This has a lot of variance
fit=smooth.spline(auto$horsepower,auto$mpg,df=16)
lines(fit,col="red",lwd=2)

# We can use Cross Validation to allow the spline to pick the value of this
# smoothing parameter. We do not need to set the degrees of freedom 'df'
fit=smooth.spline(auto$horsepower,auto$mpg,cv=TRUE)
lines(fit,col="blue",lwd=2)

## 1.1f Splines – Python

There isn’t as much treatment of splines in Python and sklearn. I did find LSQUnivariateSpline and UnivariateSpline in scipy. The LSQUnivariateSpline requires the explicit setting of knots.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from scipy.interpolate import LSQUnivariateSpline
# Read the auto data (file name assumed, as in the R sections)
autoDF=pd.read_csv("auto_mpg.csv")
autoDF.shape
autoDF.columns
autoDF1=autoDF[['mpg','cylinder','displacement','horsepower','weight','acceleration','year']]
autoDF2 = autoDF1.apply(pd.to_numeric, errors='coerce')
auto=autoDF2.dropna()
auto=auto[['horsepower','mpg']].sort_values('horsepower')

# Set the knots manually
knots=[65,75,100,150]
# Create an array for X & y
X=np.array(auto['horsepower'])
y=np.array(auto['mpg'])
# Fit a LSQunivariate spline
s = LSQUnivariateSpline(X,y,knots)

#Plot the spline
xs = np.linspace(40,230,1000)
ys = s(xs)
plt.scatter(X, y)
plt.plot(xs, ys)
plt.savefig('fig2.png', bbox_inches='tight')


## 1.2 Generalized Additive Models (GAMs)

Generalized Additive Models (GAMs) are a really powerful ML tool.

$y_{i} = \beta_{0} + f_{1}(x_{i1}) + f_{2}(x_{i2}) + .. +f_{p}(x_{ip}) + \epsilon_{i}$

In GAMs we use a different function for each of the variables. GAMs give a much better fit, since we can choose any suitable function $f_{j}$ for each feature.

## 1.2a Generalized Additive Models (GAMs) – R Code

The plots below show the smoothing splines that are fit for each of the features horsepower, cylinder, displacement, year and acceleration. We can use any function, for example loess, a 4th order polynomial etc.

library(gam)
# Fit a smoothing spline for horsepower, cylinder, displacement, year and acceleration
gam=gam(mpg~s(horsepower,4)+s(cylinder,5)+s(displacement,4)+s(year,4)+s(acceleration,5),data=auto)
# Display the summary of the fit. This gives the significance of each of the parameters
# Also an ANOVA is given for each of the features
summary(gam)
##
## Call: gam(formula = mpg ~ s(horsepower, 4) + s(cylinder, 5) + s(displacement,
##     4) + s(year, 4) + s(acceleration, 5), data = auto)
## Deviance Residuals:
##     Min      1Q  Median      3Q     Max
## -8.3190 -1.4436 -0.0261  1.2279 12.0873
##
## (Dispersion Parameter for gaussian family taken to be 6.9943)
##
##     Null Deviance: 23818.99 on 391 degrees of freedom
## Residual Deviance: 2587.881 on 370 degrees of freedom
## AIC: 1898.282
##
## Number of Local Scoring Iterations: 3
##
## Anova for Parametric Effects
##                     Df  Sum Sq Mean Sq  F value    Pr(>F)
## s(horsepower, 4)     1 15632.8 15632.8 2235.085 < 2.2e-16 ***
## s(cylinder, 5)       1   508.2   508.2   72.666 3.958e-16 ***
## s(displacement, 4)   1   374.3   374.3   53.514 1.606e-12 ***
## s(year, 4)           1  2263.2  2263.2  323.583 < 2.2e-16 ***
## s(acceleration, 5)   1   372.4   372.4   53.246 1.809e-12 ***
## Residuals          370  2587.9     7.0
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Anova for Nonparametric Effects
##                    Npar Df Npar F     Pr(F)
## (Intercept)
## s(horsepower, 4)         3 13.825 1.453e-08 ***
## s(cylinder, 5)           3 17.668 9.712e-11 ***
## s(displacement, 4)       3 44.573 < 2.2e-16 ***
## s(year, 4)               3 23.364 7.183e-14 ***
## s(acceleration, 5)       4  3.848  0.004453 **
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
par(mfrow=c(2,3))
plot(gam,se=TRUE)

## 1.2b Generalized Additive Models (GAMs) – Python Code

I did not find an equivalent of GAMs in sklearn in Python. There was an early prototype (2012) on Github; it looks like it is still a work in progress or has probably been abandoned.

## 1.3 Tree based Machine Learning Models

Tree-based Machine Learning models like bagging, random forests and boosting are all based on the ‘bootstrapping’ technique. In bootstrapping, given a sample of size N, we create datasets of size N by sampling the original dataset with replacement. Machine Learning models are built on the different bootstrapped samples and then averaged.

Decision Trees, as seen in Part 4, have a tendency to overfit. There are several techniques that help to avoid this, namely a) Bagging b) Random Forests c) Boosting

### Bagging, Random Forest and Gradient Boosting

Bagging: Bagging, or Bootstrap Aggregation, decreases the variance of predictions by building separate Decision Tree based ML models on the different bootstrapped samples and then averaging these ML models. A rough sketch of this idea follows.
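This is my illustration (not from the original post), assuming the ‘auto’ data frame used in the spline sections above:

# Bagging by hand: fit a regression tree to each bootstrap sample and average the predictions
library(tree)
set.seed(1)
B <- 25
preds <- sapply(1:B, function(b){
    idx <- sample(nrow(auto), replace=TRUE)  # bootstrap sample of size N
    fit <- tree(mpg~horsepower, data=auto[idx,])
    predict(fit, newdata=auto)               # predict on the original data
})
baggedPred <- rowMeans(preds)                # average of the B tree predictions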

Random Forests: Bagging is a greedy algorithm; every tree considers all variables at every split in order to minimize the error, so the different bagged trees end up highly correlated. Random Forests remove this shortcoming by considering only a random subset of the features at each split. Hence the resulting trees are decorrelated, and when these ML models are averaged the performance is much better.

Boosting: Gradient Boosted Decision Trees also use an ensemble of trees, but they don’t build ML models with a random set of features at each step. Rather, small and simple trees are built, and each successive tree tries to minimize the error left by the earlier trees, as sketched below.
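A minimal sketch of this residual-fitting idea (my illustration, again assuming the ‘auto’ data frame):

# Boosting by hand: each small tree is fit to the residuals of the current model
# and added in with a learning rate (shrinkage) of 0.1
library(tree)
yhat <- rep(mean(auto$mpg), nrow(auto))  # start from the mean prediction
lr <- 0.1
for(b in 1:100){
    r <- auto$mpg - yhat                 # residuals of the current ensemble
    fit <- tree(r~horsepower, data=data.frame(r=r, horsepower=auto$horsepower))
    yhat <- yhat + lr*predict(fit, newdata=data.frame(horsepower=auto$horsepower))
}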

Out of Bag (OOB) Error: In Random Forests and Gradient Boosting, for each bootstrap sample taken from the dataset there will be samples left out. These are known as Out of Bag (OOB) samples. The classification accuracy computed on these OOB samples is known as the OOB error (see the small simulation below).
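The small simulation below shows why roughly a third of the rows are Out of Bag: the chance that a given row is never drawn in a bootstrap sample of size N is $(1 - 1/N)^{N} \approx e^{-1} \approx 0.368$.

# Fraction of rows left out of a bootstrap sample, averaged over 100 samples
set.seed(7)
N <- 1000
oobFraction <- replicate(100, mean(!(1:N %in% sample(1:N, N, replace=TRUE))))
mean(oobFraction) # ~0.368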

## 1.31a Decision Trees – R Code

The code below creates a Decision Tree with the cancer training data. The summary of the fit is output. Based on the ML model, the predict function is used on the test data and a confusion matrix is output.

# Read the cancer data (from sklearn, saved as cancer.csv)
library(tree)
library(caret)
library(e1071)
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Create Decision Tree
cancerStatus=tree(target~.,train)
summary(cancerStatus)
##
## Classification tree:
## tree(formula = target ~ ., data = train)
## Variables actually used in tree construction:
## [1] "worst.perimeter"      "worst.concave.points" "area.error"
## [4] "worst.texture"        "mean.texture"         "mean.concave.points"
## Number of terminal nodes:  9
## Residual mean deviance:  0.1218 = 50.8 / 417
## Misclassification error rate: 0.02347 = 10 / 426
pred <- predict(cancerStatus,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 49  7
##          1  8 78
##
##                Accuracy : 0.8944
##                  95% CI : (0.8318, 0.9397)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 4.641e-15
##
##                   Kappa : 0.7795
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.8596
##             Specificity : 0.9176
##          Pos Pred Value : 0.8750
##          Neg Pred Value : 0.9070
##              Prevalence : 0.4014
##          Detection Rate : 0.3451
##    Detection Prevalence : 0.3944
##       Balanced Accuracy : 0.8886
##
##        'Positive' Class : 0
##

# Plot decision tree with labels
plot(cancerStatus)
text(cancerStatus,pretty=0)

## 1.31b Decision Trees – Cross Validation – R Code

We can also perform Cross Validation on the data to identify the Decision Tree which gives the minimum deviance.

library(tree)
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Create Decision Tree
cancerStatus=tree(target~.,train)
# Execute 10 fold cross validation
cvCancer=cv.tree(cancerStatus)
plot(cvCancer)

# Plot the deviance vs the size of the tree
plot(cvCancer$size,cvCancer$dev,type='b')
prunedCancer=prune.tree(cancerStatus,best=4)
plot(prunedCancer)
text(prunedCancer,pretty=0)
pred <- predict(prunedCancer,newdata=test,type="class")
confusionMatrix(pred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 50  7
##          1  7 78
##
##                Accuracy : 0.9014
##                  95% CI : (0.8401, 0.945)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 7.988e-16
##
##                   Kappa : 0.7948
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.8772
##             Specificity : 0.9176
##          Pos Pred Value : 0.8772
##          Neg Pred Value : 0.9176
##              Prevalence : 0.4014
##          Detection Rate : 0.3521
##    Detection Prevalence : 0.4014
##       Balanced Accuracy : 0.8974
##
##        'Positive' Class : 0
## 

## 1.31c Decision Trees – Python Code

Below is the Python code for creating Decision Trees. The accuracy, precision, recall and F1 score are computed on the test data set.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
import graphviz
from sklearn.datasets import load_breast_cancer

# Load the cancer data; the full dataset object is kept for the feature/class names used below
cancer = load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)

X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
clf = DecisionTreeClassifier().fit(X_train, y_train)

print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))

y_predicted=clf.predict(X_test)
confusion = confusion_matrix(y_test, y_predicted)
print('Accuracy: {:.2f}'.format(accuracy_score(y_test, y_predicted)))
print('Precision: {:.2f}'.format(precision_score(y_test, y_predicted)))
print('Recall: {:.2f}'.format(recall_score(y_test, y_predicted)))
print('F1: {:.2f}'.format(f1_score(y_test, y_predicted)))

# Plot the Decision Tree
clf = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)
dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=cancer.feature_names,
class_names=cancer.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
## Accuracy of Decision Tree classifier on training set: 1.00
## Accuracy of Decision Tree classifier on test set: 0.87
## Accuracy: 0.87
## Precision: 0.97
## Recall: 0.82
## F1: 0.89

## 1.31d Decision Trees – Cross Validation – Python Code

In the code below 10-fold cross validation is performed for different depths of the tree and the accuracy is computed. The accuracy on the test set seems to plateau when the depth is 8. But it is seen to increase again from 10 to 12. More analysis needs to be done here.


import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.cross_validation import train_test_split, KFold

(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)

def computeCVAccuracy(X,y,folds):
    # Mean cross-validated accuracy for each tree depth from 1 to 12
    foldAcc=[]
    depth=[1,2,3,4,5,6,7,8,9,10,11,12]
    for i in depth:
        accuracy=[]
        kf = KFold(len(X),n_folds=folds)
        for train_index, test_index in kf:
            X_train, X_test = X.iloc[train_index], X.iloc[test_index]
            y_train, y_test = y.iloc[train_index], y.iloc[test_index]
            clf = DecisionTreeClassifier(max_depth = i).fit(X_train, y_train)
            score=clf.score(X_test, y_test)
            accuracy.append(score)
        foldAcc.append(np.mean(accuracy))
    return(foldAcc)

cvAccuracy=computeCVAccuracy(pd.DataFrame(X_cancer),pd.DataFrame(y_cancer),folds=10)

df1=pd.DataFrame(cvAccuracy)
df1.columns=['cvAccuracy']
# Index by the depth of the tree (1-12)
df1.index=np.arange(1,13)
df1.plot()
plt.title("Decision Tree - 10-fold Cross Validation Accuracy vs Depth of tree")
plt.xlabel("Depth of tree")
plt.ylabel("Accuracy")
plt.savefig('fig3.png', bbox_inches='tight')

## 1.4a Random Forest – R code

A Random Forest is fit using the Boston data. The summary shows that 4 variables were randomly chosen at each split and the resulting ML model explains 88.72% of the variance. Also the variable importance is plotted. It can be seen that ‘rooms’ and ‘status’ are the most influential features in the model.

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL

# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                               "distances","highways","tax","teacherRatio","color",
                               "status","medianValue")

# Fit a Random Forest on the Boston training data
rfBoston=randomForest(medianValue~.,data=Boston)
# Display the summary of the fit. It can be seen that the mean of squared residuals is 9.52
# and the percentage variance explained is 88.72%. About 4 variables were tried at each
# split, for a forest of 500 trees.
# The MSE and percent variance are computed on the Out of Bag samples
rfBoston
##
## Call:
##  randomForest(formula = medianValue ~ ., data = Boston)
##                Type of random forest: regression
##                      Number of trees: 500
## No. of variables tried at each split: 4
##
##           Mean of squared residuals: 9.521672
##                     % Var explained: 88.72
#List and plot the variable importances
importance(rfBoston)
##              IncNodePurity
## crimeRate        2602.1550
## zone              258.8057
## indus            2599.6635
## charles           240.2879
## nox              2748.8485
## rooms           12011.6178
## age              1083.3242
## distances        2432.8962
## highways          393.5599
## tax              1348.6987
## teacherRatio     2841.5151
## color             731.4387
## status          12735.4046
varImpPlot(rfBoston)

## 1.4b Random Forest-OOB and Cross Validation Error – R code

The figure below shows the OOB error and the test error vs ‘mtry’. Here mtry indicates the number of random features that are considered at each split. The lowest test error occurs when mtry = 8.

library(randomForest)
df=read.csv("Boston.csv",stringsAsFactors = FALSE) # Data from MASS - SL

# Select specific columns
Boston <- df %>% dplyr::select("crimeRate","zone","indus","charles","nox","rooms","age",
                               "distances","highways","tax","teacherRatio","color",
                               "status","medianValue")
# Split into training and test sets
train_idx <- trainTestSplit(Boston,trainPercent=75,seed=5)
train <- Boston[train_idx, ]
test <- Boston[-train_idx, ]

#Initialize OOB error and test error
oobError <- NULL
testError <- NULL
# In the code below the number of variables to consider at each split is increased
# from 1 - 13(max features) and the OOB error and the MSE is computed
for(i in 1:13){
fitRF=randomForest(medianValue~.,data=train,mtry=i,ntree=400)
oobError[i] <- fitRF$mse[400]
pred <- predict(fitRF,newdata=test)
testError[i] <- mean((pred-test$medianValue)^2)
}

# We can see the OOB and Test Error. It can be seen that the Random Forest performs
# best with the lowest MSE at mtry=6
matplot(1:13,cbind(testError,oobError),pch=19,col=c("red","blue"),
type="b",xlab="mtry (no of variables at each split)", ylab="Mean Squared Error",
main="Random Forest - OOB and Test Error")
legend("topright",legend=c("Test","OOB"),pch=19,col=c("red","blue"))

## 1.4c Random Forest – Python code

The python code for Random Forest Regression is shown below. The training and test score is computed. The variable importance shows that ‘rooms’ and ‘status’ are the most influential of the variables

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Read the Boston data (assumed to be the same CSV used in the R sections)
df=pd.read_csv("Boston.csv")
X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)

regr = RandomForestRegressor(max_depth=4, random_state=0)
regr.fit(X_train, y_train)

print('R-squared score (training): {:.3f}'
.format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
.format(regr.score(X_test, y_test)))

feature_names=['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']
print(regr.feature_importances_)
plt.figure(figsize=(10,6),dpi=80)
c_features=X_train.shape[1]
plt.barh(np.arange(c_features),regr.feature_importances_)
plt.xlabel("Feature importance")
plt.ylabel("Feature name")

plt.yticks(np.arange(c_features), feature_names)
plt.tight_layout()

plt.savefig('fig4.png', bbox_inches='tight')

## R-squared score (training): 0.917
## R-squared score (test): 0.734
## [ 0.03437382  0.          0.00580335  0.          0.00731004  0.36461548
##   0.00638577  0.03432173  0.0041244   0.01732328  0.01074148  0.0012638
##   0.51373683]

## 1.4d Random Forest – Cross Validation and OOB Error – Python code

As with R, ‘max_features’ determines the number of random features the random forest considers at each split. The plot shows that the MSE is lowest when max_features=8.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']

cvError=[]
oobError=[]
oobMSE=[]
for i in range(1,13):
    regr = RandomForestRegressor(max_depth=4, n_estimators=400,max_features=i,oob_score=True,random_state=0)
    mse= np.mean(cross_val_score(regr, X, y, cv=5,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)
    # Fit on all data to compute OOB error
    regr.fit(X, y)
    # Record the OOB error for each max_features=i setting
    oob = 1 - regr.oob_score_
    oobError.append(oob)
    # Get the Out of Bag prediction
    oobPred=regr.oob_prediction_
    # Compute the Mean Squared Error between OOB Prediction and target
    mseOOB=np.mean(np.square(oobPred-y))
    oobMSE.append(mseOOB)

# Plot the CV Error and OOB Error
# Set max_features
maxFeatures=np.arange(1,13)
cvError=pd.DataFrame(cvError,index=maxFeatures)
oobMSE=pd.DataFrame(oobMSE,index=maxFeatures)
#Plot the CV error and the OOB MSE against max_features
df1=pd.concat([cvError,oobMSE],axis=1)
df1.columns=['cvError','oobMSE']
fig8=df1.plot()
fig8=plt.title('Random forest - CV Error and OOB Error vs max_features')
fig8.figure.savefig('fig8.png', bbox_inches='tight')

#Plot the OOB Error vs max_features
plt.plot(range(1,13),oobError)
fig2=plt.title("Random Forest - OOB Error vs max_features (variable no of features)")
fig2=plt.xlabel("max_features (variable no of features)")
fig2=plt.ylabel("OOB Error")
fig2.figure.savefig('fig7.png', bbox_inches='tight')


## 1.5a Boosting – R code

Here a Gradient Boosted ML model is built with n.trees=5000, a learning rate (shrinkage) of 0.01 and an interaction depth of 4. The feature importance plot also shows that ‘rooms’ and ‘status’ are the 2 most important features. The MSE vs the number of trees plateaus at around 2000 trees.

library(gbm)
# Perform gradient boosting on the Boston data set. The distribution is gaussian since we
# are minimizing squared error. The interaction depth specifies the number of splits
boostBoston=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
shrinkage=0.01,interaction.depth=4)
#The summary gives the variable importance. The 2 most significant variables are
# number of rooms and lower status
summary(boostBoston)

##                       var    rel.inf
## rooms               rooms 42.2267200
## status             status 27.3024671
## distances       distances  7.9447972
## crimeRate       crimeRate  5.0238827
## nox                   nox  4.0616548
## teacherRatio teacherRatio  3.1991999
## age                   age  2.7909772
## color               color  2.3436295
## tax                   tax  2.1386213
## charles           charles  1.3799109
## highways         highways  0.7644026
## indus               indus  0.7236082
## zone                 zone  0.1001287
# The plots below show how each variable relates to the median value of the home. As
# the number of rooms increases the median value increases, and as 'lower status'
# increases the median value decreases
par(mfrow=c(1,2))
#Plot the relation between the top 2 features and the target
plot(boostBoston,i="rooms")
plot(boostBoston,i="status")

# Create a sequence of trees between 100-5000 incremented by 50
nTrees=seq(100,5000,by=50)
# Predict the values for the test data
pred <- predict(boostBoston,newdata=test,n.trees=nTrees)
# Compute the mean for each of the MSE for each of the number of trees
boostError <- apply((pred-test$medianValue)^2,2,mean)
#Plot the MSE vs the number of trees
plot(nTrees,boostError,pch=19,col="blue",ylab="Mean Squared Error",
main="Boosting Test Error")

## 1.5b Cross Validation Boosting – R code

Included below is the cross validation error vs the learning rate. The lowest error is when the learning rate = 0.09.

cvError <- NULL
s <- c(.001,0.01,0.03,0.05,0.07,0.09,0.1)
for(i in seq_along(s)){
cvBoost=gbm(medianValue~.,data=train,distribution="gaussian",n.trees=5000,
shrinkage=s[i],interaction.depth=4,cv.folds=5)
cvError[i] <- mean(cvBoost$cv.error)
}

# Create a data frame for plotting
a <- rbind(s,cvError)
b <- as.data.frame(t(a))
# It can be seen that a shrinkage parameter of 0.05 gives the lowest CV Error
ggplot(b,aes(s,cvError)) + geom_point() + geom_line(color="blue") +
xlab("Shrinkage") + ylab("Cross Validation Error") +
ggtitle("Gradient boosted trees - Cross Validation error vs Shrinkage")

## 1.5c Boosting – Python code

A Gradient Boosted ML model in Python is created below. The R-squared score is computed on the training and test data.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)

# Create a Gradient Boosted Regressor (a default estimator is assumed here)
regr = GradientBoostingRegressor()
regr.fit(X_train, y_train)

print('R-squared score (training): {:.3f}'
.format(regr.score(X_train, y_train)))
print('R-squared score (test): {:.3f}'
.format(regr.score(X_test, y_test)))
## R-squared score (training): 0.983
## R-squared score (test): 0.821

## 1.5d Cross Validation Boosting – Python code

The cross validation error is computed as the learning rate is varied. The minimum CV error occurs when lr = 0.04.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X=df[['crimeRate','zone', 'indus','charles','nox','rooms', 'age','distances','highways','tax',
'teacherRatio','color','status']]
y=df['medianValue']

cvError=[]
learning_rate =[.001,0.01,0.03,0.05,0.07,0.09,0.1]
for lr in learning_rate:
    # Create a Gradient Boosted Regressor for each learning rate (the estimator is assumed here)
    regr = GradientBoostingRegressor(learning_rate=lr)
    mse= np.mean(cross_val_score(regr, X, y, cv=10,scoring = 'neg_mean_squared_error'))
    # Since this is neg_mean_squared_error I have inverted the sign to get MSE
    cvError.append(-mse)

plt.plot(learning_rate,cvError)
plt.title("Gradient Boosting - 10-fold CV - Mean Squared Error vs learning rate")
plt.xlabel("Learning rate")
plt.ylabel("Mean Squared Error")
plt.savefig('fig6.png', bbox_inches='tight')

## Conclusion

This post covered splines and tree based ML models like bagging, random forests and boosting. Stay tuned for further updates.

You may also like

To see all posts see Index of posts

# Practical Machine Learning with R and Python – Part 4

This is the 4th installment of my ‘Practical Machine Learning with R and Python’ series. In this part I discuss classification with Support Vector Machines (SVMs), using both a linear and a radial basis kernel, and Decision Trees. Further, a closer look is taken at some of the metrics associated with binary classification, namely accuracy vs precision and recall. I also touch upon validation curves, precision-recall curves, ROC curves and AUC, with equivalent code in R and Python.

This post is a continuation of my 3 earlier posts on Practical Machine Learning in R and Python
1. Practical Machine Learning with R and Python – Part 1
2. Practical Machine Learning with R and Python – Part 2
3. Practical Machine Learning with R and Python – Part 3

The RMarkdown file with the code and the associated data files can be downloaded from Github at MachineLearning-RandPython-Part4

1. Machine Learning in plain English-Part 1
2. Machine Learning in plain English-Part 2
3. Machine Learning in plain English-Part 3

Check out my compact and minimal book “Practical Machine Learning with R and Python: Third edition – Machine Learning in stereo”, available on Amazon in paperback ($12.99) and Kindle ($8.99) versions. My book includes implementations of key ML algorithms and associated measures and metrics. The book is ideal for anybody who is familiar with the concepts and would like a quick reference to the different ML algorithms that can be applied to problems, and how to select the best model. Pick your copy today!!

Support Vector Machines (SVMs) are another useful Machine Learning model that can be used for both regression and classification problems. SVMs used in classification compute the hyperplane that separates the 2 classes with the maximum margin. To do this the features may be transformed into a larger multi-dimensional feature space. SVMs can be used with different kernels, namely linear, polynomial or radial basis, to determine the best fitting model for a given classification problem.
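For reference, the maximum margin idea can be written as the standard soft-margin optimization problem (a textbook formulation, not taken from this post): find $\beta_{0}$, $\beta$ and slack variables $\epsilon_{i}$ that minimize $\frac{1}{2}\|\beta\|^{2} + C\sum_{i=1}^{N}\epsilon_{i}$ subject to $y_{i}(\beta_{0} + \beta^{T}x_{i}) \geq 1 - \epsilon_{i}$ and $\epsilon_{i} \geq 0$. The slack variables allow some points to violate the margin, and C is the regularization parameter that appears in the code below.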

In the 2nd part of this series, Practical Machine Learning with R and Python – Part 2, I had mentioned the various metrics that are used in classification ML problems, namely Accuracy, Precision, Recall and F1 score. Accuracy gives the fraction of data that were correctly classified as belonging to the +ve or -ve class. However ‘accuracy’ in itself is not a good enough measure, because it does not take into account the fraction of the data that were incorrectly classified. This issue becomes even more critical in certain domains. For example, a surgeon who would like to detect cancer would err on the side of caution, and classify even a possibly non-cancerous patient as possibly having cancer, rather than mis-classify a malignancy as benign. Here we would like to increase recall, or sensitivity, which is given by Recall = TP/(TP+FN). In other words, we try to reduce mis-classification by either increasing the true positives (TP) or reducing the false negatives (FN).

On the other hand, search algorithms would like to increase precision, which tries to reduce the number of irrelevant results in the search results. Precision = TP/(TP+FP). In other words, we do not want ‘false positives’, or irrelevant results, to appear in the search results, and so there is a need to reduce the false positives. A small worked example follows.
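This is my illustration, using the counts from the linear SVM confusion matrix in section 1.1a below, where the positive class is 0:

# Compute precision and recall from the cells of a binary confusion matrix
precisionRecall <- function(TP, FP, FN){
    c(precision = TP/(TP+FP), recall = TP/(TP+FN))
}
precisionRecall(TP=54, FP=3, FN=3) # both ~0.9474, matching Pos Pred Value and Sensitivity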

When we try to increase ‘precision’, we do so at the cost of ‘recall’, and vice-versa. I found the following diagram and explanation in Wikipedia very useful (Source: Wikipedia):

“Consider a brain surgeon tasked with removing a cancerous tumor from a patient’s brain. The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain she removes to ensure she has extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain she removes to ensure she extracts only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome).”

## 1.1a. Linear SVM – R code

In the R code below I use an SVM with a linear kernel.

source('RFunctions-1.R')
library(dplyr)
library(e1071)
library(caret)
library(reshape2)
library(ggplot2)
# Read data. Data from SKLearn, saved as CSV
cancer <- read.csv("cancer.csv",stringsAsFactors = FALSE)
cancer <- cancer[,2:32]
cancer$target <- as.factor(cancer$target)

# Split into training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

# Fit a linear basis kernel. Do not scale the data
svmfit=svm(target~., data=train, kernel="linear",scale=FALSE)
ypred=predict(svmfit,test)
#Print a confusion matrix
confusionMatrix(ypred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0 54  3
##          1  3 82
##
##                Accuracy : 0.9577
##                  95% CI : (0.9103, 0.9843)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : <2e-16
##
##                   Kappa : 0.9121
##  Mcnemar's Test P-Value : 1
##
##             Sensitivity : 0.9474
##             Specificity : 0.9647
##          Pos Pred Value : 0.9474
##          Neg Pred Value : 0.9647
##              Prevalence : 0.4014
##          Detection Rate : 0.3803
##    Detection Prevalence : 0.4014
##       Balanced Accuracy : 0.9560
##
##        'Positive' Class : 0
##

## 1.1b Linear SVM – Python code

The code below creates an SVM with a linear basis in Python and also dumps the corresponding classification metrics.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer

# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)

clf = LinearSVC().fit(X_train, y_train)
print('Breast cancer dataset')
print('Accuracy of Linear SVC classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Linear SVC classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Breast cancer dataset
## Accuracy of Linear SVC classifier on training set: 0.92
## Accuracy of Linear SVC classifier on test set: 0.94

## 1.2 Dummy classifier

Often when we perform classification tasks using any ML model, namely logistic regression, SVM, neural networks etc., it is very useful to determine how well the ML model performs against a dummy classifier. A dummy classifier uses some simple computation, like the frequency of the majority class, instead of fitting an ML model. It is essential that our ML model does much better than the dummy classifier. This problem is even more important with imbalanced classes, where we have only about 10% of +ve samples. If any ML model we create has an accuracy of about 0.90, then it is evident that our classifier is not doing any better than a dummy classifier, which can just take a majority count of this imbalanced class and also come up with 0.90. We need to be able to do better than that.

In the examples below (1.3a & 1.3b) it can be seen that SVMs with a ‘radial basis’ kernel on unnormalized data, for both R and Python, do not perform any better than the dummy classifier.

## 1.2a Dummy classifier – R code

R does not seem to have an explicit dummy classifier, so I created a simple dummy classifier that predicts the majority class. SKlearn in Python also includes other strategies like uniform, stratified etc., but these should be possible to create in R as well.

# Create a simple dummy classifier that computes the ratio of the majority class to the total
DummyClassifierAccuracy <- function(train,test,type="majority"){
if(type=="majority"){
count <- sum(train$target==1)/dim(train)[1]
}
count
}

cancer$target <- as.factor(cancer$target)

# Create training and test sets
train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]

#Dummy classifier majority class
acc=DummyClassifierAccuracy(train,test)
sprintf("Accuracy is %f",acc)
## [1] "Accuracy is 0.638498"

## 1.2b Dummy classifier – Python code

This dummy classifier uses the majority class.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.metrics import confusion_matrix
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)

# Negative class (0) is most frequent
dummy_majority = DummyClassifier(strategy = 'most_frequent').fit(X_train, y_train)
y_dummy_predictions = dummy_majority.predict(X_test)

print('Dummy classifier accuracy on test set: {:.2f}'
.format(dummy_majority.score(X_test, y_test)))

## Dummy classifier accuracy on test set: 0.63

## 1.3a – Radial SVM (un-normalized) – R code

SVMs perform better when the data is normalized or scaled. The 2 examples below show that an SVM with a radial basis kernel does not perform any better than the dummy classifier on unnormalized data.

library(dplyr)
library(e1071)
library(caret)
library(reshape2)
library(ggplot2)

train_idx <- trainTestSplit(cancer,trainPercent=75,seed=5)
train <- cancer[train_idx, ]
test <- cancer[-train_idx, ]
# Unnormalized data. Fit a radial basis kernel on the unscaled data (this fit is assumed here,
# as the model was not constructed in the original snippet)
svmfit=svm(target~., data=train, kernel="radial",scale=FALSE)
ypred=predict(svmfit,test)
confusionMatrix(ypred,test$target)
## Confusion Matrix and Statistics
##
##           Reference
## Prediction  0  1
##          0  0  0
##          1 57 85
##
##                Accuracy : 0.5986
##                  95% CI : (0.5131, 0.6799)
##     No Information Rate : 0.5986
##     P-Value [Acc > NIR] : 0.5363
##
##                   Kappa : 0
##  Mcnemar's Test P-Value : 1.195e-13
##
##             Sensitivity : 0.0000
##             Specificity : 1.0000
##          Pos Pred Value : NaN
##          Neg Pred Value : 0.5986
##              Prevalence : 0.4014
##          Detection Rate : 0.0000
##    Detection Prevalence : 0.0000
##       Balanced Accuracy : 0.5000
##
##        'Positive' Class : 0
##

## 1.3b – Radial SVM (un-normalized) – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)

clf = SVC(C=10).fit(X_train, y_train)
print('Breast cancer dataset (unnormalized features)')
print('Accuracy of RBF-kernel SVC on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of RBF-kernel SVC on test set: {:.2f}'
.format(clf.score(X_test, y_test)))
## Breast cancer dataset (unnormalized features)
## Accuracy of RBF-kernel SVC on training set: 1.00
## Accuracy of RBF-kernel SVC on test set: 0.63

## 1.5a – Radial SVM (Normalized) – R Code

The data is scaled (normalized) before using the SVM model. The SVM model has 2 parameters: a) C – a large C means less regularization, a small C more regularization; b) gamma – a small gamma gives a larger, smoother decision boundary with more misclassification, while a larger gamma gives a tighter decision boundary.

The R code below computes the accuracy as the regularization parameter C is changed.

trainingAccuracy <- NULL
testAccuracy <- NULL
C1 <- c(.01,.1, 1, 10, 20)
for(i in C1){
svmfit=svm(target~., data=train, kernel="radial",cost=i,scale=TRUE)
ypredTrain <-predict(svmfit,train)
ypredTest=predict(svmfit,test)
a <-confusionMatrix(ypredTrain,train$target)
b <-confusionMatrix(ypredTest,test$target)
trainingAccuracy <-c(trainingAccuracy,a$overall[1])
testAccuracy <-c(testAccuracy,b$overall[1])
}
print(trainingAccuracy)
##  Accuracy  Accuracy  Accuracy  Accuracy  Accuracy
## 0.6384977 0.9671362 0.9906103 0.9976526 1.0000000
print(testAccuracy)
##  Accuracy  Accuracy  Accuracy  Accuracy  Accuracy
## 0.5985915 0.9507042 0.9647887 0.9507042 0.9507042
a <- rbind(C1,as.numeric(trainingAccuracy),as.numeric(testAccuracy))
b <- data.frame(t(a))
names(b) <- c("C1","trainingAccuracy","testAccuracy")
df <- melt(b,id="C1")
ggplot(df) + geom_line(aes(x=C1, y=value, colour=variable),size=2) +
xlab("C (SVC regularization value)") + ylab("Accuracy") +
ggtitle("Training and test accuracy vs C (regularization)")

## 1.5b – Radial SVM (normalized) – Python

The radial basis kernel is used on normalized data for a range of ‘C’ values and the result is plotted.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()

# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
random_state = 0)
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

print('Breast cancer dataset (normalized with MinMax scaling)')
trainingAccuracy=[]
testAccuracy=[]
for C1 in [.01,.1, 1, 10, 20]:
    clf = SVC(C=C1).fit(X_train_scaled, y_train)
    acctrain=clf.score(X_train_scaled, y_train)
    accTest=clf.score(X_test_scaled, y_test)
    trainingAccuracy.append(acctrain)
    testAccuracy.append(accTest)

# Create a dataframe
C1=[.01,.1, 1, 10, 20]
trainingAccuracy=pd.DataFrame(trainingAccuracy,index=C1)
testAccuracy=pd.DataFrame(testAccuracy,index=C1)
# Plot training and test accuracy as a function of C
df=pd.concat([trainingAccuracy,testAccuracy],axis=1)
df.columns=['trainingAccuracy','testAccuracy']
fig1=df.plot()
fig1=plt.title('Training and test accuracy vs C (SVC)')
fig1.figure.savefig('fig1.png', bbox_inches='tight')
## Breast cancer dataset (normalized with MinMax scaling)

Output image:

## 1.6a Validation curve – R code

Sklearn includes code for creating validation curves, computing and plotting the accuracy as a parameter such as gamma or C is varied. I did not find this in R, but I think this is a useful function and so I have created the R equivalent of it.

# The R equivalent of np.logspace
seqLogSpace <- function(start,stop,len){
a=seq(log10(10^start),log10(10^stop),length=len)
10^a
}

# Read the data. This is taken from the SKlearn cancer data
cancer <- read.csv("cancer.csv")
cancer$target <- as.factor(cancer$target)
set.seed(6)

# Create the range of gamma in log space
param_range = seqLogSpace(-3,2,20)
# Initialize the overall training and test accuracy to NULL
overallTrainAccuracy <- NULL
overallTestAccuracy <- NULL

# Loop over the parameter range of gamma
for(i in param_range){
# Set no of folds
noFolds=5
# Create the rows which fall into different folds from 1..noFolds
folds = sample(1:noFolds, nrow(cancer), replace=TRUE)
# Initialize the training and test accuracy of folds to 0
trainingAccuracy <- 0
testAccuracy <- 0
# Loop through the folds
for(j in 1:noFolds){
# The training set is all rows for which the fold index is != j (k-1 folds -> training)
train <- cancer[folds!=j,]
# The rows which have j as the index become the test set
test <- cancer[folds==j,]
# Create a SVM model for this value of gamma
svmfit=svm(target~., data=train, kernel="radial",gamma=i,scale=TRUE)
# Add up all the fold accuracies for training and test separately
ypredTrain <-predict(svmfit,train)
ypredTest=predict(svmfit,test)
# Create confusion matrices
a <-confusionMatrix(ypredTrain,train$target)
b <-confusionMatrix(ypredTest,test$target)
# Get the accuracy
trainingAccuracy <-trainingAccuracy + a$overall[1]
testAccuracy <-testAccuracy+b$overall[1]
}
# Compute the average accuracy over the K folds for parameter value 'i'
overallTrainAccuracy=c(overallTrainAccuracy,trainingAccuracy/noFolds)
overallTestAccuracy=c(overallTestAccuracy,testAccuracy/noFolds)
}

#Create a dataframe
a <- rbind(param_range,as.numeric(overallTrainAccuracy),
as.numeric(overallTestAccuracy))
b <- data.frame(t(a))
names(b) <- c("gamma","trainingAccuracy","testAccuracy")
df <- melt(b,id="gamma")
#Plot with a log x-axis
ggplot(df) + geom_line(aes(x=gamma, y=value, colour=variable),size=2) +
xlab("gamma") + ylab("Accuracy") +
ggtitle("Training and test accuracy vs gamma") + scale_x_log10()

## 1.6b Validation curve – Python

Compute and plot the validation curve as gamma is varied.

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import validation_curve

# Load the cancer data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X_cancer)

# Create gamma values from 10^-3 to 10^2 with 20 equally spaced intervals
param_range = np.logspace(-3, 2, 20)
# Compute the validation curve
train_scores, test_scores = validation_curve(SVC(), X_scaled, y_cancer,
param_name='gamma',
param_range=param_range, cv=10)

#Plot the figure
fig2=plt.figure()
#Compute the means and standard deviations
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
fig2=plt.title('Validation Curve with SVM')
fig2=plt.xlabel('$\gamma$(gamma)')
fig2=plt.ylabel('Score')
fig2=plt.ylim(0.0, 1.1)
lw = 2
fig2=plt.semilogx(param_range, train_scores_mean, label='Training score',
color='darkorange', lw=lw)
fig2=plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color='darkorange', lw=lw)
fig2=plt.semilogx(param_range, test_scores_mean, label='Cross-validation score',
color='navy', lw=lw)
fig2=plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color='navy', lw=lw)
fig2.figure.savefig('fig2.png', bbox_inches='tight')

Output image:

## 1.7a Validation Curve (Preventing data leakage) – Python code

In the course Applied Machine Learning in Python, the Professor states that when we apply the same data transformation to the entire dataset, it will cause data leakage. “The proper way to do cross-validation when you need to scale the data is not to scale the entire dataset with a single transform, since this will indirectly leak information into the training data about the whole dataset, including the test data (see the lecture on data leakage later in the course). Instead, scaling/normalizing must be computed and applied for each cross-validation fold separately.” So I apply separate scaling to the training and testing folds and plot. In the lecture the Prof states that this can be done using pipelines.
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.cross_validation import KFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# Read the data
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
# Set the parameter range
param_range = np.logspace(-3, 2, 20)
# Set number of folds
folds=5
#Initialize
overallTrainAccuracy=[]
overallTestAccuracy=[]

# Loop over the parameter range
for c in param_range:
    trainingAccuracy=0
    testAccuracy=0
    kf = KFold(len(X_cancer),n_folds=folds)
    # Partition into training and test folds
    for train_index, test_index in kf:
        # Partition the data according to the fold indices generated
        X_train, X_test = X_cancer[train_index], X_cancer[test_index]
        y_train, y_test = y_cancer[train_index], y_cancer[test_index]
        # Scale X_train and X_test separately to prevent data leakage
        scaler = MinMaxScaler()
        X_train_scaled = scaler.fit_transform(X_train)
        X_test_scaled = scaler.transform(X_test)
        # Fit a SVC model for each C
        clf = SVC(C=c).fit(X_train_scaled, y_train)
        #Compute the training and test scores
        acctrain=clf.score(X_train_scaled, y_train)
        accTest=clf.score(X_test_scaled, y_test)
        trainingAccuracy += np.sum(acctrain)
        testAccuracy += np.sum(accTest)
    # Compute the mean training and testing accuracy
    overallTrainAccuracy.append(trainingAccuracy/folds)
    overallTestAccuracy.append(testAccuracy/folds)

overallTrainAccuracy=pd.DataFrame(overallTrainAccuracy,index=param_range)
overallTestAccuracy=pd.DataFrame(overallTestAccuracy,index=param_range)
# Plot training and test accuracy as a function of C
df=pd.concat([overallTrainAccuracy,overallTestAccuracy],axis=1)
df.columns=['trainingAccuracy','testAccuracy']
fig3=plt.title('Validation Curve with SVM')
fig3=plt.xlabel('C (SVC regularization)')
fig3=plt.ylabel('Score')
fig3=plt.ylim(0.5, 1.1)
lw = 2
fig3=plt.semilogx(param_range, overallTrainAccuracy, label='Training score',
color='darkorange', lw=lw)
fig3=plt.semilogx(param_range, overallTestAccuracy, label='Cross-validation score',
color='navy', lw=lw)
fig3=plt.legend(loc='best')
fig3.figure.savefig('fig3.png', bbox_inches='tight')

Output image:

## 1.8a Decision trees – R code

Decision trees in R can be plotted using the rpart package.

library(rpart)
library(rpart.plot)
rpart = NULL
# Create a decision tree
m <-rpart(Species~.,data=iris)
#Plot
rpart.plot(m,extra=2,main="Decision Tree - IRIS")

## 1.8b Decision trees – Python code

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn import tree
from sklearn.model_selection import train_test_split
import graphviz

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
random_state = 3)
clf = DecisionTreeClassifier().fit(X_train, y_train)

print('Accuracy of Decision Tree classifier on training set: {:.2f}'
.format(clf.score(X_train, y_train)))
print('Accuracy of Decision Tree classifier on test set: {:.2f}'
.format(clf.score(X_test, y_test)))

dot_data = tree.export_graphviz(clf, out_file=None,
feature_names=iris.feature_names,
class_names=iris.target_names,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
## Accuracy of Decision Tree classifier on training set: 1.00
## Accuracy of Decision Tree classifier on test set: 0.97

## 1.9a Feature importance – R code

I found the following code, which had a snippet for feature importance. Sklearn has a nice method for this. For some reason the results in R and Python are different.
Any thoughts?

set.seed(3)
# load the libraries
library(mlbench)
library(caret)
# load the dataset
cancer <- read.csv("cancer.csv")
cancer$target <- as.factor(cancer$target)
# Split into predictors and target
data <- cancer[,1:31]
target <- cancer[,32]
# Train the model
model <- train(data, target, method="rf", preProcess="scale",
               trControl=trainControl(method = "cv"))
# Compute variable importance
importance <- varImp(model)
# summarize importance
print(importance)
# plot importance
plot(importance)

## 1.9b Feature importance – Python code

import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_breast_cancer

# Read the data
cancer= load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer,
                                                    random_state = 0)
# Use the DecisionTreeClassifier
clf = DecisionTreeClassifier(max_depth = 4, min_samples_leaf = 8,
                             random_state = 0).fit(X_train, y_train)

c_features=len(cancer.feature_names)
print('Breast cancer dataset: decision tree')
print('Accuracy of DT classifier on training set: {:.2f}'
      .format(clf.score(X_train, y_train)))
print('Accuracy of DT classifier on test set: {:.2f}'
      .format(clf.score(X_test, y_test)))

# Plot the feature importances
fig4=plt.figure(figsize=(10,6),dpi=80)
fig4=plt.barh(range(c_features), clf.feature_importances_)
fig4=plt.xlabel("Feature importance")
fig4=plt.ylabel("Feature name")
fig4=plt.yticks(np.arange(c_features), cancer.feature_names)
fig4=plt.tight_layout()
plt.savefig('fig4.png', bbox_inches='tight')

## Breast cancer dataset: decision tree
## Accuracy of DT classifier on training set: 0.96
## Accuracy of DT classifier on test set: 0.94

Output image:

## 1.10a Precision-Recall, ROC curves & AUC – R code

I tried several R packages for plotting Precision-Recall and ROC curves; PRROC seems to work well. The Precision-Recall curve shows the tradeoff between precision and recall: the higher the precision, the lower the recall, and vice versa. ROC curves that hug the top left corner indicate high sensitivity and specificity, and hence excellent accuracy.

source("RFunctions-1.R")
library(dplyr)
library(caret)
library(e1071)
library(PRROC)
# Read the data (this data is from sklearn!)
d <- read.csv("digits.csv")
digits <- d[2:66]
digits$X64 <- as.factor(digits$X64)
# Split into training and test sets
train_idx <- trainTestSplit(digits,trainPercent=75,seed=5)
train <- digits[train_idx, ]
test <- digits[-train_idx, ]

# Fit a SVM model with a linear kernel, with probabilities
svmfit=svm(X64~., data=train, kernel="linear",scale=FALSE,probability=TRUE)
ypred=predict(svmfit,test,probability=TRUE)
head(attr(ypred,"probabilities"))
##               0            1
## 6  7.395947e-01 2.604053e-01
## 8  9.999998e-01 1.842555e-07
## 12 1.655178e-05 9.999834e-01
## 13 9.649997e-01 3.500032e-02
## 15 9.994849e-01 5.150612e-04
## 16 9.999987e-01 1.280700e-06

# Store the probabilities of the 0s and the 1s
m0<-attr(ypred,"probabilities")[,1]
m1<-attr(ypred,"probabilities")[,2]
# Create a dataframe of scores
scores <- data.frame(m1,test$X64)

# scores.class0 takes the scores of the +ve class (digit 1) and scores.class1 those of the -ve class (digit 0)
#Compute Precision Recall
pr <- pr.curve(scores.class0=scores[scores$test.X64=="1",]$m1,
scores.class1=scores[scores$test.X64=="0",]$m1,
curve=T)

# Plot precision-recall curve
plot(pr)

#Compute and plot the ROC curve, with the same positive/negative split as above
roc <- roc.curve(scores.class0=scores[scores$test.X64=="1",]$m1,
                 scores.class1=scores[scores$test.X64=="0",]$m1,
                 curve=TRUE)
plot(roc)
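As a quick aside, the scalar AUC values need no separate computation; the PRROC objects computed above carry them directly (field names as per the PRROC documentation):

# Area under the Precision-Recall curve (integral approximation)
pr$auc.integral
# Area under the ROC curve
roc$auc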

## 1.10b Precision-Recall, ROC curves & AUC – Python code

For Python, Logistic Regression is used to plot the Precision-Recall and ROC curves and to compute the AUC.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve, auc
from sklearn.datasets import load_digits

# Load the sklearn digits dataset
dataset = load_digits()
X, y = dataset.data, dataset.target
#Create 2 classes - i) digit 1 (the +ve class) ii) digit 0 (all other digits)
# Make a copy of the target
z= y.copy()
# Replace all non 1's as 0
z[z != 1] = 0

X_train, X_test, y_train, y_test = train_test_split(X, z, random_state=0)
# Fit a LR model
lr = LogisticRegression().fit(X_train, y_train)

#Compute the decision scores
y_scores_lr = lr.decision_function(X_test)
y_score_list = list(zip(y_test[0:20], y_scores_lr[0:20]))

#Show the decision_function scores for first 20 instances
y_score_list

precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
#Plot
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.gca().set_aspect('equal')
plt.savefig('fig5.png', bbox_inches='tight')

#Compute and plot the ROC
y_score_lr = lr.decision_function(X_test)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_score_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)

plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.gca().set_aspect('equal')
plt.savefig('fig6.png', bbox_inches='tight')


Output images:

## 1.10c Precision-Recall, ROC curves & AUC – Python code

In the code below, classification probabilities (from CalibratedClassifierCV, rather than decision-function scores) are used to compute and plot the precision-recall curve.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

from sklearn.datasets import load_digits

# Load the sklearn digits dataset (as in 1.10b)
dataset = load_digits()
X, y = dataset.data, dataset.target
# Make a copy of the target
z= y.copy()
# Replace all non 1's as 0
z[z != 1] = 0

X_train, X_test, y_train, y_test = train_test_split(X, z, random_state=0)
svm = LinearSVC()
# Need to wrap LinearSVC in CalibratedClassifierCV to get predicted probabilities
clf = CalibratedClassifierCV(svm)
clf.fit(X_train, y_train)
y_proba_lr = clf.predict_proba(X_test)
from sklearn.metrics import precision_recall_curve

precision, recall, thresholds = precision_recall_curve(y_test, y_proba_lr[:,1])
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
#plt.figure(figsize=(15,15),dpi=80)
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.gca().set_aspect('equal')
plt.savefig('fig7.png', bbox_inches='tight')

# The making of cricket package yorkr – Part 3

## Introduction

This is the 3rd part of my cricket package yorkr in R. In my 2 earlier posts

1. The making of cricket package yorkr – Part 1. This post analyzed the performance of a team in an ODI match, covering both the batting and bowling performances of the team. It also analyzed a country in all matches against another country, e.g. India in all matches against Australia. The best performers with the bat and ball were determined, along with the best batting partnerships, the performances at different venues etc. The detailed performances of the bowlers of India and Australia in the confrontation were also analyzed.
2. The making of cricket package yorkr – Part 2. This post covers all ODI matches between a country and all others. For obvious reasons I chose India, and selected all ODI matches played by India with other countries. This includes the batting and bowling performances of the country against all oppositions.

As mentioned in my earlier posts the data is taken from Cricsheet

If you are passionate about cricket, and love analyzing cricket performances, then check out my 2 racy books on cricket! In my books, I perform detailed yet compact analysis of the performances of both batsmen and bowlers, besides evaluating team & match performances in Tests, ODIs, T20s & IPL. You can buy my books on cricket from Amazon at $12.99 for the paperbacks, and $4.99/Rs 320 and $6.99/Rs 448 respectively for the kindle versions. The books can be accessed at Cricket analytics with cricketr and Beaten by sheer pace – Cricket analytics with yorkr. A must read for any cricket lover! Check it out!!

Important note: Do check out my other posts using yorkr at yorkr-posts

Important note: Do check out all the posts on the python avatar of yorkr, namely ‘yorkpy’, in my post ‘Pitching yorkpy … short of good length to IPL – Part 1’

In this post I look at individual performances of batsmen and bowlers in ODIs. For this post I have chosen Virat Kohli & Mahendra Singh Dhoni from India. Kohli has been consistent and in great form right through. Dhoni follows Kohli very closely in ODIs and, besides his shrewd captaincy, is one of the best ODI batsmen and a great finisher. I have also included AB de Villiers from South Africa, who seems to invent new strokes and shots every time, much like Glenn Maxwell.

For the bowling analyses I have selected RA Jadeja, Harbhajan Singh and R Ashwin (the top Indian ODI bowlers) and Mitchell Johnson, who is among the best in the world.

This post is also available at RPubs at yorkr-3. You can also download this post as a pdf from yorkr-3.pdf

Checkout my interactive Shiny apps GooglyPlus (plots & tables) and Googly (only plots) which can be used to analyze IPL players, teams and matches.

My earlier package ‘cricketr’ (see Introducing cricketr: An R package for analyzing performances of cricketers) was based on data from ESPN Cricinfo Statsguru. Do take a look at my book with all my articles based on the package cricketr – Cricket analytics with cricketr!!! The book is also available in paperback and kindle versions at Amazon, which have, by the way, better formatting!

I have added some quick observations on the plots below. However, there is a lot more that can be discerned from the plots than I can possibly explain; the charts display a wealth of insights. Do take a close look at the plots.

library(dplyr)
library(ggplot2)
library(yorkr)
library(reshape2)
library(gridExtra)
library(rpart.plot)

### 1. Batting Details

The following functions get the overall batting details for a country against all opposition.

a <- getTeamBattingDetails("India",save=TRUE)
b <- getTeamBattingDetails("South Africa",save=TRUE)

### 2. Get Batsman details

Now I get the details of the batsmen Virat Kohli and Mahendra Singh Dhoni from the saved India file and AB De Villiers from the saved South Africa file

kohli <- getBatsmanDetails(team="India",name="Kohli")
dhoni <- getBatsmanDetails(team="India",name="Dhoni")
devilliers <-  getBatsmanDetails(team="South Africa",name="Villiers")

### 3. Display the dataframe

The dataframes obtained from the calls above provide detailed information for each batsman in every ODI match. These dataframes have all the fields that can be obtained from ESPN Cricinfo.

## Performance analyses of batsmen

### 4. Runs vs deliveries plot

It can be seen from the plots below that Kohli is very consistent in the runs scored; the runs crowd near the regression curve. There is more variance in Dhoni’s and De Villiers’ performances. The band on either side of the regression curve represents the 95% confidence interval (a 95% confidence level means that 95% of such intervals would include the population parameter). A short sketch of how such a band can be produced follows the code below.

p1 <-batsmanRunsVsDeliveries(kohli,"Kohli")
p2 <- batsmanRunsVsDeliveries(dhoni, "Dhoni")
p3 <- batsmanRunsVsDeliveries(devilliers,"De Villiers")
grid.arrange(p1,p2,p3, ncol=3)
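As an aside, the shaded band is the standard ggplot2 smoother ribbon. Here is a minimal sketch of how such a plot can be produced, assuming the kohli dataframe with the ballsPlayed and runs columns used in section 5 below, and a loess smoother; yorkr's actual implementation may differ.

library(ggplot2)
# Scatter plot of runs vs deliveries faced, with a smoothed fit;
# the ribbon around the curve is the 95% confidence interval (level=0.95)
ggplot(kohli, aes(x=ballsPlayed, y=runs)) +
    geom_point() +
    geom_smooth(method="loess", level=0.95) +
    ggtitle("Kohli - Runs vs deliveries (sketch)")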

### 5. Total runs vs 4s vs 6s plot

The plots below show the runs (total runs, runs from 4s & runs from 6s) vs the deliveries faced. Kohli scores more runs and more fours, which can be seen from the slopes of the blue and red regression lines (reaching 150+ and 50+ for total runs and runs from fours respectively). De Villiers has more runs from sixes, as can be seen in the 3rd subplot (green line).

kohli46 <- select(kohli,batsman,ballsPlayed,fours,sixes,runs)
p1 <- batsmanFoursSixes(kohli46,"Kohli")
dhoni46 <- select(dhoni,batsman,ballsPlayed,fours,sixes,runs)
p2 <- batsmanFoursSixes(dhoni46,"Dhoni")
devilliers46 <- select(devilliers,batsman,ballsPlayed,fours,sixes,runs)
p3 <- batsmanFoursSixes(devilliers46,"De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

### 6. Batsmen dismissals

Interestingly, it can be seen that Dhoni has remained unbeaten more often (47 times) than Kohli or De Villiers. Dhoni, despite being a great runner between wickets, has been run out more often.

p1 <-batsmanDismissals(kohli,"Kohli")
p2 <- batsmanDismissals(dhoni, "Dhoni")
p3 <- batsmanDismissals(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

### 7. Batsmen Strike Rate

From the plots below Kohli has the best strike rate till 100 runs; the slope seems to be steeper. De Villiers seems to do better after 100 runs.

p1 <-batsmanMeanStrikeRate(kohli,"Kohli")
p2 <- batsmanMeanStrikeRate(dhoni, "Dhoni")
p3 <- batsmanMeanStrikeRate(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

### 8. Batsmen moving average

Kohli’s and De Villiers’ form can be seen to be improving over the years. Dhoni seems to have hit a slump in recent times. But we have to keep in mind that he has the second highest ODI runs for India and is just behind Kohli.

p1 <-batsmanMovingAverage(kohli,"Kohli")
p2 <- batsmanMovingAverage(dhoni, "Dhoni")
p3 <- batsmanMovingAverage(devilliers, "De Villiers")
grid.arrange(p1,p2,p3, ncol=3)

### 9. Batsmen against opposition

Kohli averages 50 runs against 6 countries, to Dhoni’s 4. Kohli performs well against Australia, New Zealand, West Indies, Pakistan and Bangladesh, while his performance against England has been mediocre. De Villiers averages around 50 against 5 countries.

batsmanRunsAgainstOpposition(kohli,"Kohli")

batsmanRunsAgainstOpposition(dhoni, "Dhoni")

batsmanRunsAgainstOpposition(devilliers, "De Villiers")

### 10. Batsmen runs at different venues

Kohli’s favorite hunting grounds in ODIs are Adelaide, Sydney, Western Australia and Wankhede. Dhoni’s best performances are at Lords, Sydney and Chepauk.

batsmanRunsVenue(kohli,"Kohli")

batsmanRunsVenue(dhoni, "Dhoni")

batsmanRunsVenue(devilliers, "De Villiers")

### 11. Batsmen runs predict

The plots below predict the number of deliveries needed by each batsman to score the runs shown. For this I have used classification trees on deliveries and runs, built with the rpart package (a small rpart sketch follows the output below). From the plot for Kohli it can be seen that in about 58 deliveries he scores around 52 runs. On the other hand De Villiers needs just over 40 deliveries to score the same 52 runs.

par(mfrow=c(1,3))
par(mar=c(4,4,2,2))
batsmanRunsPredict(kohli,"Kohli")
batsmanRunsPredict(dhoni, "Dhoni")
batsmanRunsPredict(devilliers, "De Villiers")

dev.off()
## null device
##           1
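A plot like this can be approximated with a tree fitted on deliveries and runs. Below is a minimal sketch, assuming the kohli dataframe from above with its ballsPlayed and runs columns; since runs is numeric, rpart fits a regression tree here, and yorkr's actual implementation may differ in its details.

library(rpart)
library(rpart.plot)
# Fit a tree that predicts runs from the deliveries faced;
# intermediate nodes split on deliveries, leaves are predicted runs
m <- rpart(runs ~ ballsPlayed, data=kohli)
rpart.plot(m, main="Runs predicted from deliveries (sketch)")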

### 12. Get team bowling details

The functions below get the bowling details of India and Australia in all their ODI matches against all other countries.

c <- getTeamBowlingDetails("India",save=TRUE)
d <- getTeamBowlingDetails("Australia",save=TRUE)

### 13. Get wicket details

The functions below get the data frame for each bowler.

jadeja <- getBowlerWicketDetails(team="India",name="Jadeja")
harbhajan <- getBowlerWicketDetails(team="India",name="Harbhajan")
ashwin <- getBowlerWicketDetails(team="India",name="Ashwin")
johnson <-  getBowlerWicketDetails(team="Australia",name="Johnson")

### 14. Display data frame

The details of the data frame are shown below

knitr::kable(head(jadeja))

| bowler | overs | maidens | runs | wickets | economyRate | date | opposition | venue |
|--------|-------|---------|------|---------|-------------|------|------------|-------|
| RA Jadeja | 7 | 1 | 34 | 0 | 4.86 | 2009-06-26 | West Indies | Sabina Park, Kingston |
| RA Jadeja | 2 | 0 | 12 | 0 | 6.00 | 2009-06-28 | West Indies | Sabina Park, Kingston |
| RA Jadeja | 7 | 1 | 35 | 4 | 5.00 | 2009-10-28 | Australia | Vidarbha Cricket Association Stadium, Jamtha |
| RA Jadeja | 7 | 1 | 35 | 4 | 5.00 | 2009-10-28 | Australia | Vidarbha Cricket Association Stadium, Jamtha |

### 15. Bowler Economy rate

Harbhajan and Ashwin have a better economy rate than RA Jadeja

p1 <- bowlerEconomyRate(jadeja,"RA Jadeja")
p2<-bowlerEconomyRate(harbhajan, "Harbhajan")
p3<-bowlerEconomyRate(ashwin, "Ashwin")
p4<-bowlerEconomyRate(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

### 16. Mean runs conceded by bowler

p1<-bowlerMeanRuns(jadeja,"RA Jadeja")
p2<-bowlerMeanRuns(harbhajan, "Harbhajan")
p3<-bowlerMeanRuns(ashwin, "Ashwin")
p4<-bowlerMeanRuns(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

### 17. Moving average of bowler

From the plots below MG Johnson, Harbhajan and Ashwin have been performing very consistently. RA Jadeja’s bowling seems to be taking a nosedive, though he is at the top among all Indian ODI bowlers.

p1<-bowlerMovingAverage(jadeja,"RA Jadeja")
p2<-bowlerMovingAverage(harbhajan, "Harbhajan")
p3<-bowlerMovingAverage(ashwin, "Ashwin")
p4<-bowlerMovingAverage(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

### 18. Wicket average

Jadeja has a better wicket average than Harbhajan and Ashwin. Jadeja and Ashwin average around 2 wickets, while Harbhajan averages 1.5 wickets (tending towards 2).

p1<-bowlerWicketPlot(jadeja,"RA Jadeja")
p2<-bowlerWicketPlot(harbhajan, "Harbhajan")
p3<-bowlerWicketPlot(ashwin, "Ashwin")
p4<-bowlerWicketPlot(johnson, "MG Johnson")
grid.arrange(p1,p2,p3,p4, ncol=2)

### 19. Wickets against opposition

Jadeja’s best performances have been against England, Pakistan, New Zealand and Zimbabwe. For Harbhajan it has been New Zealand, Sri Lanka and Zimbabwe.

bowlerWicketsAgainstOpposition(jadeja,"RA Jadeja")

bowlerWicketsAgainstOpposition(harbhajan, "Harbhajan")

bowlerWicketsAgainstOpposition(ashwin, "Ashwin")

bowlerWicketsAgainstOpposition(johnson, "MG Johnson")

### 20. Wickets at venues

The top 20 venues for each bowler are shown in the plots below.

bowlerWicketsVenue(jadeja,"RA Jadeja")

bowlerWicketsVenue(harbhajan, "Harbhajan")

bowlerWicketsVenue(ashwin, "Ashwin")

bowlerWicketsVenue(johnson, "MG Johnson")

### 21. Create a data frame with wickets and deliveries

jadeja1 <- getDeliveryWickets(team="India",name="Jadeja",save=FALSE)
harbhajan1 <- getDeliveryWickets(team="India",name="Harbhajan",save=FALSE)
ashwin1 <- getDeliveryWickets(team="India",name="Ashwin",save=FALSE)
johnson1 <- getDeliveryWickets(team="Australia",name="MG Johnson",save=FALSE)

### 22. Deliveries to wickets plots

The following plots predict the average number of deliveries required for the wickets taken. As with the batsmen runs prediction, I have used trees fitted with the rpart package on the deliveries at which wickets were taken (see the rpart sketch after section 11 above). The intermediate nodes are the number of deliveries and the leaf nodes are the wickets taken. Though the wickets are in decimals, we can interpret the tree as follows: RA Jadeja needs around 22 deliveries to take 1.6 wickets (~2 wickets). Interestingly, Harbhajan needs only around 11 deliveries for a breakthrough.

par(mfrow=c(1,2))
par(mar=c(4,4,2,2))
bowlerWktsPredict(jadeja1,"RA Jadeja")
bowlerWktsPredict(harbhajan1,"Harbhajan Singh")

dev.off()
## null device
##           1

Similarly MG Johnson can provide a breakthrough with just around 14 deliveries

par(mfrow=c(1,2))
par(mar=c(4,4,2,2))
bowlerWktsPredict(ashwin1,"Ravichander Ashwin")
bowlerWktsPredict(johnson1,"MG Johnson")

dev.off()
## null device
##           1

## Conclusion

### ODI batsman

1. The top 2 Indian ODI batsmen (Kohli and Dhoni) and De Villiers of South Africa were considered.
2. Kohli has a better strike rate till about 100 runs (steeper slope) and De Villiers beyond 100.
3. Dhoni has remained unbeaten more often than the other two. His average might well have been higher had he come in to bat earlier.
4. Kohli and De Villiers have performed consistently. Dhoni needs to get back his touch.

### ODI bowlers

1. RA Jadeja has a better wicket-taking rate than Harbhajan and Ashwin.
2. Ashwin and Harbhajan have a better economy rate than Jadeja.
3. Harbhajan, Ashwin and MG Johnson have performed consistently, while RA Jadeja’s performance has been on the decline.
4. Harbhajan and MG Johnson need around 11 balls to make a breakthrough.

This was probably the last set of functions for my cricket package yorkr. Over the next several weeks I will be cleaning up, documenting, refining the functions and removing any glitches. I hope to have the package released in the next 6-8 weeks
