Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR

“There’s something so paradoxical about pi. On the one hand, it represents order, as embodied by the shape of a circle, long held to be a symbol of perfection and eternity. On the other hand, pi is unruly, disheveled in appearance, its digits obeying no obvious rule, or at least none that we can perceive. Pi is elusive and mysterious, forever beyond reach. Its mix of order and disorder is what makes it so bewitching. ” 

From Infinite Powers by Steven Strogatz

Anybody who wants to be “anybody” in Big Data must necessarily be able to work on both large structured and unstructured data. Log analysis is critical in any enterprise, and logs are usually unstructured. As I mentioned in my previous post Big Data: On RDDs, Dataframes, Hive QL with Pyspark and SparkR-Part 3, RDDs are typically used to handle unstructured data. Spark has the Dataframe abstraction over RDDs, which performs better as it is optimized by the Catalyst optimization engine. Nevertheless, it is important to be able to process data with RDDs. This post is a continuation of my 3 earlier posts on Big Data, namely

1. Big Data-1: Move into the big league:Graduate from Python to Pyspark
2. Big Data-2: Move into the big league:Graduate from R to SparkR
3. Big Data: On RDDs, Dataframes,Hive QL with Pyspark and SparkR-Part 3

This post uses publicly available Webserver logs from NASA. The logs are for the months Jul 95 and Aug 95 and are a good place to start unstructured text analysis/log analysis. I highly recommend parsing these publicly available logs with regular expressions. It is only when you do this that the truth of Jamie Zawinski’s pearl of wisdom

“Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems.” – Jamie Zawinski

hits home. I spent many hours struggling with regex!!

For the RDD part of this post, I had to refer to Dr. Fisseha Berhane’s blog post Webserver Log Analysis, and for the Pyspark part, to the UC Berkeley course on edX, Big Data Analysis with Apache Spark, which I had done 3 years back. Once I had played around with the regex for RDDs and Pyspark, I managed to get the SparkR and SparklyR versions to work.

The notebooks used in this post have been published and are available at

  1. logsAnalysiswithRDDs
  2. logsAnalysiswithPyspark
  3. logsAnalysiswithSparkRandSparklyR

You can also download all the notebooks from Github at WebServerLogsAnalysis

An essential and unavoidable aspect of Big Data processing is the need to process unstructured text. Web server logs are one such area which requires Big Data techniques to process massive amounts of logs. The Common Log Format, also known as the NCSA Common log format, is a standardized text file format used by web servers when generating server log files. Because the format is standardized, the files can be readily analyzed.

A publicly available set of webserver logs is the NASA-HTTP Web server logs. This is a good dataset with which we can play around to get familiar with handling web server logs. The logs can be accessed at NASA-HTTP

Description: These two traces contain two months’ worth of all HTTP requests to the NASA Kennedy Space Center WWW server in Florida.

Format: The logs are an ASCII file with one line per request, with the following columns:

- host making the request. A hostname when possible, otherwise the Internet address if the name could not be looked up.

- timestamp in the format “DAY MON DD HH:MM:SS YYYY”, where DAY is the day of the week, MON is the name of the month, DD is the day of the month, HH:MM:SS is the time of day using a 24-hour clock, and YYYY is the year. The timezone is -0400.

- request given in quotes.

- HTTP reply code.

- bytes in the reply.
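
To make the format concrete, here is a small illustrative mapping (a sketch, using a sample line that appears later in this post) of one log line to the fields above:

# One NCSA Common Log Format line mapped to its fields (illustrative only)
line = 'in24.inetnebr.com - - [01/Aug/1995:00:00:01 -0400] "GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0" 200 1839'
fields = {
    'host': 'in24.inetnebr.com',                # requesting host
    'timestamp': '01/Aug/1995:00:00:01 -0400',  # local time with -0400 offset
    'request': 'GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0',
    'status': 200,                              # HTTP reply code
    'bytes': 1839                               # bytes in the reply
}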

1 Parse Web server logs with RDDs

1.1 Read NASA Web server logs

Read the logs files from NASA for the months Jul 95 and Aug 95

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext

conf = SparkConf().setAppName("Spark-Logs-Handling").setMaster("local[*]")
sc = SparkContext.getOrCreate(conf)

sqlContext = SQLContext(sc)
rdd = sc.textFile("/FileStore/tables/NASA_access_log_*.gz")
rdd.count()
Out[1]: 3461613
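
Since this RDD is reused by many transformations below, optionally caching it avoids re-reading the gzipped files on every action (a small optional step, not in the original notebook):

rdd.cache()  # keep the lines in memory across the repeated actions below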

1.2 Check content

Check the logs to identify the parsing rules required for the logs

i=0
for line in rdd.sample(withReplacement = False, fraction = 0.00001, seed = 100).collect():
    i=i+1
    print(line)
    if i >5:
      break
ix-stp-fl2-19.ix.netcom.com - - [03/Aug/1995:23:03:09 -0400] "GET /images/faq.gif HTTP/1.0" 200 263
slip183-1.kw.jp.ibm.net - - [04/Aug/1995:18:42:17 -0400] "GET /shuttle/missions/sts-70/images/DSC-95EC-0001.gif HTTP/1.0" 200 107133
piweba4y.prodigy.com - - [05/Aug/1995:19:17:41 -0400] "GET /icons/menu.xbm HTTP/1.0" 200 527
ruperts.bt-sys.bt.co.uk - - [07/Aug/1995:04:44:10 -0400] "GET /shuttle/countdown/video/livevideo2.gif HTTP/1.0" 200 69067
dal06-04.ppp.iadfw.net - - [07/Aug/1995:21:10:19 -0400] "GET /images/NASA-logosmall.gif HTTP/1.0" 200 786
p15.ppp-1.directnet.com - - [10/Aug/1995:01:22:54 -0400] "GET /images/KSC-logosmall.gif HTTP/1.0" 200 1204

1.3 Write the parsing rule for each of the fields

  • host
  • timestamp
  • path
  • status
  • content_bytes

1.21 Get IP address/host name

This regex matches at the start of the log line and includes any non-whitespace characters

import re
rslt=(rdd.map(lambda line: re.search(r'\S+',line)
   .group(0))
   .take(3)) # Get the IP address/host name
rslt
Out[3]: ['in24.inetnebr.com', 'uplherc.upl.com', 'uplherc.upl.com']

1.22 Get timestamp

Get the time stamp

rslt=(rdd.map(lambda line: re.search(r'(\S+ -\d{4})',line)
    .groups())
    .take(3))  # Get the date
rslt
[('[01/Aug/1995:00:00:01 -0400',),
('[01/Aug/1995:00:00:07 -0400',),
('[01/Aug/1995:00:00:08 -0400',)]
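
If needed, the captured timestamp can be converted into a Python datetime. A minimal sketch, assuming the bracketed capture shown above:

from datetime import datetime

# Strip the leading '[' and drop the '-0400' offset before parsing (sketch only)
ts = '[01/Aug/1995:00:00:01 -0400'.lstrip('[').split(' ')[0]
dt = datetime.strptime(ts, '%d/%b/%Y:%H:%M:%S')
print(dt)  # 1995-08-01 00:00:01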

1.23 HTTP request

Get the HTTP request sent to the Web server; the request (e.g. GET) appears within double quotes

# Get the REST call within " "
rslt=(rdd.map(lambda line: re.search('"\w+\s+([^\s]+)\s+HTTP.*"',line)
    .groups())
    .take(3)) # Get the REST call
rslt
[('/shuttle/missions/sts-68/news/sts-68-mcc-05.txt',),
('/',),
('/images/ksclogo-medium.gif',)]

1.24 Get HTTP response status

Get the HTTP response to the request

rslt=(rdd.map(lambda line: re.search('"\s(\d{3})',line)
    .groups())
    .take(3)) #Get the status
rslt
Out[6]: [('200',), ('304',), ('304',)]

1.25 Get content size

Get the HTTP response in bytes

rslt=(rdd.map(lambda line: re.search(r'^.*\s(\d*)$',line)
    .groups())
    .take(3)) # Get the content size
rslt
Out[7]: [('1839',), ('0',), ('0',)]

1.26 Putting it all together

Now put all the individual pieces together into one big regular expression and assign the matches to groups

  1. Host
  2. Timestamp
  3. Path
  4. Status
  5. Content_size

rslt=(rdd.map(lambda line: re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s(\d*)$)',line)
    .groups())
    .take(3))
rslt
[('in24.inetnebr.com',
' -',
' ',
'-',
'[01/Aug/1995:00:00:01 -0400]',
'"GET /shuttle/missions/sts-68/news/sts-68-mcc-05.txt HTTP/1.0"',
'/shuttle/missions/sts-68/news/sts-68-mcc-05.txt',
'200 1839',
'1839'),
('uplherc.upl.com',
' -',
' ',
'-',
'[01/Aug/1995:00:00:07 -0400]',
'"GET / HTTP/1.0"',
'/',
'304 0',
'0'),
('uplherc.upl.com',
' -',
' ',
'-',
'[01/Aug/1995:00:00:08 -0400]',
'"GET /images/ksclogo-medium.gif HTTP/1.0"',
'/images/ksclogo-medium.gif',
'304 0',
'0')]

1.27 Add a log parsing function

import re
def parse_log1(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s(\d*)$)',line)
    if match is None:    
        return(line,0)
    else:
        return(line,1)

1.28 Check for parsing failure

Check how many lines successfully parsed with the parsing function

n_logs = rdd.count()
failed = rdd.map(lambda line: parse_log1(line)).filter(lambda line: line[1] == 0).count()
print('Out of a total of {} logs, {} failed to parse'.format(n_logs,failed))
# Get the failed records line[1] == 0
failed1=rdd.map(lambda line: parse_log1(line)).filter(lambda line: line[1]==0)
failed1.take(3)
Out of a total of 3461613 logs, 38768 failed to parse
Out[10]:
[('gw1.att.com - - [01/Aug/1995:00:03:53 -0400] "GET /shuttle/missions/sts-73/news HTTP/1.0" 302 -',
0),
('js002.cc.utsunomiya-u.ac.jp - - [01/Aug/1995:00:07:33 -0400] "GET /shuttle/resources/orbiters/discovery.gif HTTP/1.0" 404 -',
0),
('pipe1.nyc.pipeline.com - - [01/Aug/1995:00:12:37 -0400] "GET /history/apollo/apollo-13/apollo-13-patch-small.gif" 200 12859',
0)]

1.29 The above rule is not enough to parse the logs

It can be seen that the single rule only parses part of the logs, and we cannot map the regex matches to groups for the unparsed lines. The error "AttributeError: 'NoneType' object has no attribute 'group'" shows up, since re.search returns None when a line does not match

#rdd.map(lambda line: re.search(r'^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s(\d*)$)',line[0]).group()).take(4)

File "/databricks/spark/python/pyspark/util.py", line 99, in wrapper
return f(*args, **kwargs)
File "<command-1348022240961444>", line 1, in <lambda>
AttributeError: 'NoneType' object has no attribute 'group'

at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:490)
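
A common defensive pattern is to test the match object before calling group(), for example (a sketch, not from the original notebook):

def safe_search(pattern, line):
    # Return the matched groups, or None when the line does not match
    m = re.search(pattern, line)
    return m.groups() if m else None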

1.30 Add rule for parsing failed records

One of the issues with the earlier rule is the content_size has “-” for some logs

import re
def parse_failed(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s-$)',line)
    if match is None:        
        return(line,0)
    else:
        return(line,1)

1.31 Parse records which fail

Parse the records that fail with the new rule

failed2=rdd.map(lambda line: parse_failed(line)).filter(lambda line: line[1]==1)
failed2.take(5)
Out[13]:
[('gw1.att.com - - [01/Aug/1995:00:03:53 -0400] "GET /shuttle/missions/sts-73/news HTTP/1.0" 302 -',
1),
('js002.cc.utsunomiya-u.ac.jp - - [01/Aug/1995:00:07:33 -0400] "GET /shuttle/resources/orbiters/discovery.gif HTTP/1.0" 404 -',
1),
('tia1.eskimo.com - - [01/Aug/1995:00:28:41 -0400] "GET /pub/winvn/release.txt HTTP/1.0" 404 -',
1),
('itws.info.eng.niigata-u.ac.jp - - [01/Aug/1995:00:38:01 -0400] "GET /ksc.html/facts/about_ksc.html HTTP/1.0" 403 -',
1),
('grimnet23.idirect.com - - [01/Aug/1995:00:50:12 -0400] "GET /www/software/winvn/winvn.html HTTP/1.0" 404 -',
1)]

1.32 Add both rules

Add both rules for parsing the log.

Note: it can be shown that even with both rules not all the logs are parsed. Further rules may need to be added

import re
def parse_log2(line):
    # Parse logs with the rule below
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(\d*)$',line)
    # If match failed then use the rule below
    if match is None:
        match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3}\s-$)',line)
    if match is None:
        return (line, 0) # Return 0 for failure
    else:
        return (line, 1) # Return 1 for success
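
As a quick check (a sketch using the function above), we can count how many lines still fail even with both rules:

# Count the lines that neither rule can parse
still_failed = rdd.map(parse_log2).filter(lambda x: x[1] == 0).count()
print('{} lines still fail to parse with both rules'.format(still_failed))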

1.33 Group the different regex matches into groups for handling

def map2groups(line):
    match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(\d*)$',line)
    if match is None:
        match = re.search('^(\S+)((\s)(-))+\s(\[\S+ -\d{4}\])\s("\w+\s+([^\s]+)\s+HTTP.*")\s(\d{3})\s(-)$',line)    
    return(match.groups())

1.34 Parse the logs and map the groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])

parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
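
For readability, the positional groups can be mapped to named fields. A sketch (not in the original notebook), assuming the group layout of the regex above, where group 0 is the host, 4 the timestamp, 5 the quoted request, 6 the path, 7 the status and 8 the content size:

def to_record(groups):
    # Map the positional regex groups to named fields (sketch only)
    return {'host': groups[0],
            'timestamp': groups[4],
            'path': groups[6],
            'status': groups[7],
            'content_size': groups[8]}

parsed_rdd2.map(to_record).take(1)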

2. Parse Web server logs with Pyspark

2.1 Read data into a Pyspark dataframe

import os
logs_file_path="/FileStore/tables/" + os.path.join('NASA_access_log_*.gz')
from pyspark.sql.functions import split, regexp_extract
base_df = sqlContext.read.text(logs_file_path)
#base_df.show(truncate=False)
from pyspark.sql.functions import split, regexp_extract
split_df = base_df.select(regexp_extract('value', r'^([^\s]+\s)', 1).alias('host'),
                          regexp_extract('value', r'^.*\[(\d\d\/\w{3}\/\d{4}:\d{2}:\d{2}:\d{2} -\d{4})]', 1).alias('timestamp'),
                          regexp_extract('value', r'^.*"\w+\s+([^\s]+)\s+HTTP.*"', 1).alias('path'),
                          regexp_extract('value', r'^.*"\s+([^\s]+)', 1).cast('integer').alias('status'),
                          regexp_extract('value', r'^.*\s+(\d+)$', 1).cast('integer').alias('content_size'))
split_df.show(5,truncate=False)
+---------------------+--------------------------+-----------------------------------------------+------+------------+
|host                 |timestamp                 |path                                           |status|content_size|
+---------------------+--------------------------+-----------------------------------------------+------+------------+
|199.72.81.55         |01/Jul/1995:00:00:01 -0400|/history/apollo/                               |200   |6245        |
|unicomp6.unicomp.net |01/Jul/1995:00:00:06 -0400|/shuttle/countdown/                            |200   |3985        |
|199.120.110.21       |01/Jul/1995:00:00:09 -0400|/shuttle/missions/sts-73/mission-sts-73.html   |200   |4085        |
|burger.letters.com   |01/Jul/1995:00:00:11 -0400|/shuttle/countdown/liftoff.html                |304   |0           |
|199.120.110.21       |01/Jul/1995:00:00:11 -0400|/shuttle/missions/sts-73/sts-73-patch-small.gif|200   |4179        |
+---------------------+--------------------------+-----------------------------------------------+------+------------+
only showing top 5 rows

2.2 Check data

bad_rows_df = split_df.filter(split_df['host'].isNull() |
                              split_df['timestamp'].isNull() |
                              split_df['path'].isNull() |
                              split_df['status'].isNull() |
                             split_df['content_size'].isNull())
bad_rows_df.count()
Out[20]: 33905

2.3 Check the number of rows which do not have digits

We have already seen in the RDD section that the content_size field has '-' instead of digits

bad_content_size_df = base_df.filter(~ base_df['value'].rlike(r'\d+$'))
bad_content_size_df.count()
Out[21]: 33905

2.4 Add '*' to identify bad rows

To identify the bad rows, concatenate '*' to the content_size field where the field does not have digits. It can be seen that the content_size has '-' instead of a valid number

from pyspark.sql.functions import lit, concat
bad_content_size_df.select(concat(bad_content_size_df['value'], lit('*'))).show(4,truncate=False)
+---------------------------------------------------------------------------------------------------------------------------------------------------+
|concat(value, *)                                                                                                                                     |
+---------------------------------------------------------------------------------------------------------------------------------------------------+
|dd15-062.compuserve.com - - [01/Jul/1995:00:01:12 -0400] "GET /news/sci.space.shuttle/archive/sci-space-shuttle-22-apr-1995-40.txt HTTP/1.0" 404 -*  |
|dynip42.efn.org - - [01/Jul/1995:00:02:14 -0400] "GET /software HTTP/1.0" 302 -*                                                                     |
|ix-or10-06.ix.netcom.com - - [01/Jul/1995:00:02:40 -0400] "GET /software/winvn HTTP/1.0" 302 -*                                                      |
|ix-or10-06.ix.netcom.com - - [01/Jul/1995:00:03:24 -0400] "GET /software HTTP/1.0" 302 -*                                                            |
+---------------------------------------------------------------------------------------------------------------------------------------------------+

2.5 Fill NAs with 0s

# Replace all null content_size values with 0.

cleaned_df = split_df.na.fill({'content_size': 0})
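
A quick sanity check (assuming the dataframes above) confirms that no nulls remain after the fill:

# Should now return 0
cleaned_df.filter(cleaned_df['content_size'].isNull()).count()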

3. Webserver logs parsing with SparkR

# Load the SparkR library
library(SparkR)
library(stringr)

# Initiate a SparkR session
sparkR.session()
sc <- sparkR.session()
sqlContext <- sparkRSQL.init(sc)

# Read the Jul 95 logs (the Aug 95 logs are at /FileStore/tables/NASA_access_log_Aug95.gz)
file_location = "/FileStore/tables/NASA_access_log_Jul95.gz"
df <- read.text(sqlContext, file_location)

#df=SparkR::select(df, "value")
#head(SparkR::collect(df))
#m=regexp_extract(df$value,'\\\\S+',1)

a=df %>% 
  withColumn('host', regexp_extract(df$value, '^(\\S+)', 1)) %>%
  withColumn('timestamp', regexp_extract(df$value, "((\\S+ -\\d{4}))", 2)) %>%
  withColumn('path', regexp_extract(df$value, '(\\"\\w+\\s+([^\\s]+)\\s+HTTP.*")', 2))  %>%
  withColumn('status', regexp_extract(df$value, '(^.*"\\s+([^\\s]+))', 2)) %>%
  withColumn('content_size', regexp_extract(df$value, '(^.*\\s+(\\d+)$)', 2))
#b=a%>% select(host,timestamp,path,status,content_type)
head(SparkR::collect(a),10)

1 199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] "GET /history/apollo/ HTTP/1.0" 200 6245
2 unicomp6.unicomp.net - - [01/Jul/1995:00:00:06 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 3985
3 199.120.110.21 - - [01/Jul/1995:00:00:09 -0400] "GET /shuttle/missions/sts-73/mission-sts-73.html HTTP/1.0" 200 4085
4 burger.letters.com - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/countdown/liftoff.html HTTP/1.0" 304 0
5 199.120.110.21 - - [01/Jul/1995:00:00:11 -0400] "GET /shuttle/missions/sts-73/sts-73-patch-small.gif HTTP/1.0" 200 4179
6 burger.letters.com - - [01/Jul/1995:00:00:12 -0400] "GET /images/NASA-logosmall.gif HTTP/1.0" 304 0
7 burger.letters.com - - [01/Jul/1995:00:00:12 -0400] "GET /shuttle/countdown/video/livevideo.gif HTTP/1.0" 200 0
8 205.212.115.106 - - [01/Jul/1995:00:00:12 -0400] "GET /shuttle/countdown/countdown.html HTTP/1.0" 200 3985
9 d104.aa.net - - [01/Jul/1995:00:00:13 -0400] "GET /shuttle/countdown/ HTTP/1.0" 200 3985
10 129.94.144.152 - - [01/Jul/1995:00:00:13 -0400] "GET / HTTP/1.0" 200 7074
host timestamp
1 199.72.81.55 [01/Jul/1995:00:00:01 -0400
2 unicomp6.unicomp.net [01/Jul/1995:00:00:06 -0400
3 199.120.110.21 [01/Jul/1995:00:00:09 -0400
4 burger.letters.com [01/Jul/1995:00:00:11 -0400
5 199.120.110.21 [01/Jul/1995:00:00:11 -0400
6 burger.letters.com [01/Jul/1995:00:00:12 -0400
7 burger.letters.com [01/Jul/1995:00:00:12 -0400
8 205.212.115.106 [01/Jul/1995:00:00:12 -0400
9 d104.aa.net [01/Jul/1995:00:00:13 -0400
10 129.94.144.152 [01/Jul/1995:00:00:13 -0400
path status content_size
1 /history/apollo/ 200 6245
2 /shuttle/countdown/ 200 3985
3 /shuttle/missions/sts-73/mission-sts-73.html 200 4085
4 /shuttle/countdown/liftoff.html 304 0
5 /shuttle/missions/sts-73/sts-73-patch-small.gif 200 4179
6 /images/NASA-logosmall.gif 304 0
7 /shuttle/countdown/video/livevideo.gif 200 0
8 /shuttle/countdown/countdown.html 200 3985
9 /shuttle/countdown/ 200 3985
10 / 200 7074

4 Webserver logs parsing with SparklyR

install.packages("sparklyr")
library(sparklyr)
library(dplyr)
library(stringr)
#sc <- spark_connect(master = "local", version = "2.1.0")
sc <- spark_connect(method = "databricks")
sdf <-spark_read_text(sc, name="df", path = "/FileStore/tables/NASA_access_log*.gz")
sdf
Installing package into ‘/databricks/spark/R/lib’
# Source: spark [?? x 1]
   line                                                                         
   <chr>                                                                        
 1 "199.72.81.55 - - [01/Jul/1995:00:00:01 -0400] \"GET /history/apollo/ HTTP/1…
 2 "unicomp6.unicomp.net - - [01/Jul/1995:00:00:06 -0400] \"GET /shuttle/countd…
 3 "199.120.110.21 - - [01/Jul/1995:00:00:09 -0400] \"GET /shuttle/missions/sts…
 4 "burger.letters.com - - [01/Jul/1995:00:00:11 -0400] \"GET /shuttle/countdow…
 5 "199.120.110.21 - - [01/Jul/1995:00:00:11 -0400] \"GET /shuttle/missions/sts…
 6 "burger.letters.com - - [01/Jul/1995:00:00:12 -0400] \"GET /images/NASA-logo…
 7 "burger.letters.com - - [01/Jul/1995:00:00:12 -0400] \"GET /shuttle/countdow…
 8 "205.212.115.106 - - [01/Jul/1995:00:00:12 -0400] \"GET /shuttle/countdown/c…
 9 "d104.aa.net - - [01/Jul/1995:00:00:13 -0400] \"GET /shuttle/countdown/ HTTP…
10 "129.94.144.152 - - [01/Jul/1995:00:00:13 -0400] \"GET / HTTP/1.0\" 200 7074"
# … with more rows
#install.packages("sparklyr")
library(sparklyr)
library(dplyr)
library(stringr)
#sc <- spark_connect(master = "local", version = "2.1.0")
sc <- spark_connect(method = "databricks")
sdf <-spark_read_text(sc, name="df", path = "/FileStore/tables/NASA_access_log*.gz")
sdf <- sdf %>% mutate(host = regexp_extract(line, '^(\\\\S+)',1)) %>% 
               mutate(timestamp = regexp_extract(line, '((\\\\S+ -\\\\d{4}))',2)) %>%
               mutate(path = regexp_extract(line, '(\\\\"\\\\w+\\\\s+([^\\\\s]+)\\\\s+HTTP.*")',2)) %>%
               mutate(status = regexp_extract(line, '(^.*"\\\\s+([^\\\\s]+))',2)) %>%
               mutate(content_size = regexp_extract(line, '(^.*\\\\s+(\\\\d+)$)',2))

5 Hosts

5.1  RDD

5.11 Parse and map hosts to groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))

# Create tuples of (host,1) and apply reduceByKey() and order by descending
rslt=(parsed_rdd2.map(lambda x: (x[0],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[18]:
[('piweba3y.prodigy.com', 21988),
('piweba4y.prodigy.com', 16437),
('piweba1y.prodigy.com', 12825),
('edams.ksc.nasa.gov', 11962),
('163.206.89.4', 9697),
('news.ti.com', 8161),
('www-d1.proxy.aol.com', 8047),
('alyssa.prodigy.com', 8037),
('siltb10.orl.mmc.com', 7573),
('www-a2.proxy.aol.com', 7516)]

5.12 Plot counts of hosts

import seaborn as sns

import pandas as pd
import matplotlib.pyplot as plt
df=pd.DataFrame(rslt,columns=['host','count'])
sns.barplot(x='host',y='count',data=df)
plt.subplots_adjust(bottom=0.6, right=0.8, top=0.9)
plt.xticks(rotation="vertical",fontsize=8)
display()

5.2 PySpark

5.21 Compute counts of hosts

df= (cleaned_df
     .groupBy('host')
     .count()
     .orderBy('count',ascending=False))
df.show(5)
+--------------------+-----+
|                host|count|
+--------------------+-----+
|piweba3y.prodigy.com|21988|
|piweba4y.prodigy.com|16437|
|piweba1y.prodigy.com|12825|
|  edams.ksc.nasa.gov|11964|
|        163.206.89.4| 9697|
+--------------------+-----+
only showing top 5 rows

5.22 Plot count of hosts

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df1=df.toPandas()
df2 = df1.head(10)
df2.count()
sns.barplot(x='host',y='count',data=df2)
plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9)
plt.xlabel("Hosts")
plt.ylabel('Count')
plt.xticks(rotation="vertical",fontsize=10)
display()

5.3 SparkR

5.31 Compute count of hosts

c <- SparkR::select(a,a$host)
df=SparkR::summarize(SparkR::groupBy(c, a$host), noHosts = count(a$host))
df1 =head(arrange(df,desc(df$noHosts)),10)
head(df1)
                  host noHosts
1 piweba3y.prodigy.com   17572
2 piweba4y.prodigy.com   11591
3 piweba1y.prodigy.com    9868
4   alyssa.prodigy.com    7852
5  siltb10.orl.mmc.com    7573
6 piweba2y.prodigy.com    5922

5.32 Plot count of hosts

library(ggplot2)
p <-ggplot(data=df1, aes(x=host, y=noHosts,fill=host)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Host') + ylab('Count')
p

5.4 SparklyR

5.41 Compute count of Hosts

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(host) %>% group_by(host) %>% summarise(noHosts=n()) %>% arrange(desc(noHosts))
df2 <-head(df1,10)

5.42 Plot count of hosts

library(ggplot2)

p <-ggplot(data=df2, aes(x=host, y=noHosts,fill=host)) + geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Host') + ylab('Count')

p

6 Paths

6.1 RDD

6.11 Parse and map paths to groups

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x: (x[5],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
[('"GET /images/NASA-logosmall.gif HTTP/1.0"', 207520),
('"GET /images/KSC-logosmall.gif HTTP/1.0"', 164487),
('"GET /images/MOSAIC-logosmall.gif HTTP/1.0"', 126933),
('"GET /images/USA-logosmall.gif HTTP/1.0"', 126108),
('"GET /images/WORLD-logosmall.gif HTTP/1.0"', 124972),
('"GET /images/ksclogo-medium.gif HTTP/1.0"', 120704),
('"GET /ksc.html HTTP/1.0"', 83209),
('"GET /images/launch-logo.gif HTTP/1.0"', 75839),
('"GET /history/apollo/images/apollo-logo1.gif HTTP/1.0"', 68759),
('"GET /shuttle/countdown/ HTTP/1.0"', 64467)]

6.12 Plot counts of HTTP Requests

import seaborn as sns

df=pd.DataFrame(rslt,columns=['path','count'])
sns.barplot(x='path',y='count',data=df)
plt.subplots_adjust(bottom=0.7, right=0.8, top=0.9)
plt.xticks(rotation="vertical",fontsize=8)

display()

6.2 Pyspark

6.21 Compute count of HTTP Requests

df= (cleaned_df
     .groupBy('path')
     .count()
     .orderBy('count',ascending=False))
df.show(5)
Out[20]:
+--------------------+------+
|                path| count|
+--------------------+------+
|/images/NASA-logo...|208362|
|/images/KSC-logos...|164813|
|/images/MOSAIC-lo...|127656|
|/images/USA-logos...|126820|
|/images/WORLD-log...|125676|
+--------------------+------+
only showing top 5 rows

6.22 Plot count of HTTP Requests

import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
df1=df.toPandas()
df2 = df1.head(10)
df2.count()
sns.barplot(x='path',y='count',data=df2)
plt.subplots_adjust(bottom=0.7, right=0.8, top=0.9)
plt.xlabel("HTTP Requests")
plt.ylabel('Count')
plt.xticks(rotation=90,fontsize=8)
display()

6.3 SparkR

6.31 Compute count of HTTP requests

library(SparkR)
c <- SparkR::select(a,a$path)
df=SparkR::summarize(SparkR::groupBy(c, a$path), numRequest = count(a$path))
df1=head(df)

6.32 Plot count of HTTP Requests

library(ggplot2)
p <-ggplot(data=df1, aes(x=path, y=numRequest,fill=path)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1))+ xlab('Path') + ylab('Count')
p

6.4 SparklyR

6.41 Compute count of paths

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(path) %>% group_by(path) %>% summarise(noPaths=n()) %>% arrange(desc(noPaths))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noPaths)
   path                                    noPaths
   <chr>                                     <dbl>
 1 /images/NASA-logosmall.gif               208362
 2 /images/KSC-logosmall.gif                164813
 3 /images/MOSAIC-logosmall.gif             127656
 4 /images/USA-logosmall.gif                126820
 5 /images/WORLD-logosmall.gif              125676
 6 /images/ksclogo-medium.gif               121286
 7 /ksc.html                                 83685
 8 /images/launch-logo.gif                   75960
 9 /history/apollo/images/apollo-logo1.gif   68858
10 /shuttle/countdown/                       64695

6.42 Plot count of Paths

library(ggplot2)
p <-ggplot(data=df2, aes(x=path, y=noPaths,fill=path)) +   geom_bar(stat="identity")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Path') + ylab('Count')
p

7 HTTP Status

7.1 RDD

7.11 Compute count of HTTP Status

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])

parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x: (x[7],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[22]:
[('200', 3095682),
('304', 266764),
('302', 72970),
('404', 20625),
('403', 225),
('500', 65),
('501', 41)]

7.12 Plot counts of HTTP response status

import seaborn as sns

df=pd.DataFrame(rslt,columns=['status','count'])
sns.barplot(x='status',y='count',data=df)
plt.subplots_adjust(bottom=0.4, right=0.8, top=0.9)
plt.xticks(rotation="vertical",fontsize=8)

display()

7.2 Pyspark

7.21 Compute count of HTTP status

status_count=(cleaned_df
                .groupBy('status')
                .count()
                .orderBy('count',ascending=False))
status_count.show()
+------+-------+
|status|  count|
+------+-------+
|   200|3100522|
|   304| 266773|
|   302|  73070|
|   404|  20901|
|   403|    225|
|   500|     65|
|   501|     41|
|   400|     15|
|  null|      1|
+------+-------+

7.22 Plot count of HTTP status

Plot the HTTP return status vs the counts

df1=status_count.toPandas()

df2 = df1.head(10)
df2.count()
sns.barplot(x='status',y='count',data=df2)
plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9)
plt.xlabel("HTTP Status")
plt.ylabel('Count')
plt.xticks(rotation="vertical",fontsize=10)
display()

7.3 SparkR

7.31 Compute count of HTTP Response status

library(SparkR)
c <- SparkR::select(a,a$status)
df=SparkR::summarize(SparkR::groupBy(c, a$status), numStatus = count(a$status))
df1=head(df)

7.32 Plot count of HTTP Response status

library(ggplot2)
p <-ggplot(data=df1, aes(x=status, y=numStatus,fill=status)) +   geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Status') + ylab('Count')
p

7.4 SparklyR

7.41 Compute count of status

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(status) %>% group_by(status) %>% summarise(noStatus=n()) %>% arrange(desc(noStatus))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noStatus)
  status noStatus
  <chr>     <dbl>
1 200     3100522
2 304      266773
3 302       73070
4 404       20901
5 403         225
6 500          65
7 501          41
8 400          15
9 ""            1

7.42 Plot count of status

library(ggplot2)

p <-ggplot(data=df2, aes(x=status, y=noStatus,fill=status)) + geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Status') + ylab('Count')
p

8 Content Size

8.1 RDD

8.11 Compute count of content size

parsed_rdd = rdd.map(lambda line: parse_log2(line)).filter(lambda line: line[1] == 1).map(lambda line : line[0])
parsed_rdd2 = parsed_rdd.map(lambda line: map2groups(line))
rslt=(parsed_rdd2.map(lambda x: (x[8],1))
                 .reduceByKey(lambda a,b:a+b)
                 .takeOrdered(10, lambda x: -x[1]))
rslt
Out[24]:
[('0', 280017),
('786', 167281),
('1204', 140505),
('363', 111575),
('234', 110824),
('669', 110056),
('5866', 107079),
('1713', 66904),
('1173', 63336),
('3635', 55528)]

8.12 Plot count of content size

import seaborn as sns

df=pd.DataFrame(rslt,columns=['content_size','count'])
sns.barplot(x='content_size',y='count',data=df)
plt.subplots_adjust(bottom=0.4, right=0.8, top=0.9)
plt.xticks(rotation="vertical",fontsize=8)
display()

8.2 Pyspark

8.21 Compute count of content_size

size_counts=(cleaned_df
                .groupBy('content_size')
                .count()
                .orderBy('count',ascending=False))
size_counts.show(10)
+------------+------+
|content_size| count|
+------------+------+
|           0|313932|
|         786|167709|
|        1204|140668|
|         363|111835|
|         234|111086|
|         669|110313|
|        5866|107373|
|        1713| 66953|
|        1173| 63378|
|        3635| 55579|
+------------+------+
only showing top 10 rows

8.22 Plot counts of content size

Plot the content size versus the counts

df1=size_counts.toPandas()

df2 = df1.head(10)
df2.count()
sns.barplot(x='content_size',y='count',data=df2)
plt.subplots_adjust(bottom=0.5, right=0.8, top=0.9)
plt.xlabel("content_size")
plt.ylabel('Count')
plt.xticks(rotation="vertical",fontsize=10)
display()

8.3 SparkR

8.31 Compute count of content size

library(SparkR)
c <- SparkR::select(a,a$content_size)
df=SparkR::summarize(SparkR::groupBy(c, a$content_size), numContentSize = count(a$content_size))
df1=head(df)
df1
  content_size numContentSize
1        28426           1414
2        78382            293
3        60053              4
4        36067              2
5        13282            236
6        41785            174

8.32 Plot count of content sizes

library(ggplot2)

p <-ggplot(data=df1, aes(x=content_size, y=numContentSize,fill=content_size)) + geom_bar(stat="identity") + theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Content Size') + ylab('Count')

p

8.4 SparklyR

8.41 Compute count of content_size

df <- sdf %>% select(host,timestamp,path,status,content_size)
df1 <- df %>% select(content_size) %>% group_by(content_size) %>% summarise(noContentSize=n()) %>% arrange(desc(noContentSize))
df2 <-head(df1,10)
df2
# Source: spark [?? x 2]
# Ordered by: desc(noContentSize)
   content_size noContentSize
   <chr>                <dbl>
 1 0                   280027
 2 786                 167709
 3 1204                140668
 4 363                 111835
 5 234                 111086
 6 669                 110313
 7 5866                107373
 8 1713                 66953
 9 1173                 63378
10 3635                 55579

8.42 Plot count of content_size

library(ggplot2)
p <-ggplot(data=df2, aes(x=content_size, y=noContentSize,fill=content_size)) +   geom_bar(stat="identity")+ theme(axis.text.x = element_text(angle = 90, hjust = 1)) + xlab('Content size') + ylab('Count')
p

Conclusion: I spent many, many hours struggling with regex and getting the RDD and Pyspark versions to work. I also had to spend a lot of time working out the syntax for SparkR and SparklyR parsing. Once you parse the logs, the plotting and analysis is a piece of cake! This is definitely worth a try!

Watch this space!!

Also see
1. Practical Machine Learning with R and Python – Part 3
2. Deep Learning from first principles in Python, R and Octave – Part 5
3. My book ‘Cricket analytics with cricketr and cricpy’ is now on Amazon
4. Latency, throughput implications for the Cloud
5. Modeling a Car in Android
6. Architecting a cloud based IP Multimedia System (IMS)
7. Dabbling with Wiener filter using OpenCV

To see all posts click Index of posts

My book ‘Cricket analytics with cricketr and cricpy’ is now on Amazon

‘Cricket analytics with cricketr and cricpy – Analytics harmony with R and Python’ is now available on Amazon in both paperback ($21.99) and Kindle ($9.99/Rs 449) versions. The book includes analysis of cricketers using both my R package ‘cricketr’ and my Python package ‘cricpy’ for all formats of the game, namely Test, ODI and T20. Both packages use data from ESPN Cricinfo Statsguru.

Pick up your copy today!

The book includes the following chapters

CONTENTS

Introduction 7
1. Cricket analytics with cricketr 9
1.1. Introducing cricketr! : An R package to analyze performances of cricketers 10
1.2. Taking cricketr for a spin – Part 1 48
1.3. cricketr digs the Ashes! 69
1.4. cricketr plays the ODIs! 97
1.5. cricketr adapts to the Twenty20 International! 139
1.6. Sixer – R package cricketr’s new Shiny avatar 168
1.7. Re-introducing cricketr! : An R package to analyze performances of cricketers 178
1.8. cricketr sizes up legendary All-rounders of yesteryear 233
1.9. cricketr flexes new muscles: The final analysis 277
1.10. The Clash of the Titans in Test and ODI cricket 300
1.11. Analyzing performances of cricketers using cricketr template 338
2. Cricket analytics with cricpy 352
2.1 Introducing cricpy:A python package to analyze performances of cricketers 353
2.2 Cricpy takes a swing at the ODIs 405
Analysis of Top 4 batsman 448
2.3 Cricpy takes guard for the Twenty20s 449
2.4 Analyzing batsmen and bowlers with cricpy template 490
9. Average runs against different opposing teams 493
3. Other cricket posts in R 500
3.1 Analyzing cricket’s batting legends – Through the mirage with R 500
3.2 Mirror, mirror … the best batsman of them all? 527
4. Appendix 541
Cricket analysis with Machine Learning using Octave 541
4.1 Informed choices through Machine Learning – Analyzing Kohli, Tendulkar and Dravid 542
4.2 Informed choices through Machine Learning-2 Pitting together Kumble, Kapil, Chandra 555
Further reading 569
Important Links 570

Also see
1. My book “Deep Learning from first principles” now on Amazon
2. Practical Machine Learning with R and Python – Part 1
3. Revisiting World Bank data analysis with WDI and gVisMotionChart
4. Natural language processing: What would Shakespeare say?
5. Optimal Cloud Computing
6. Pitching yorkpy … short of good length to IPL – Part 1
7. Computer Vision: Ramblings on derivatives, histograms and contours

To see all posts click Index of posts

Big Data: On RDDs, Dataframes,Hive QL with Pyspark and SparkR-Part 3

Some people, when confronted with a problem, think “I know, I’ll use regular expressions.” Now they have two problems. – Jamie Zawinski

Some programmers, when confronted with a problem, think “I know, I’ll use floating point arithmetic.” Now they have 1.999999999997 problems. – @tomscott

Some people, when confronted with a problem, think “I know, I’ll use multithreading”. Nothhw tpe yawrve o oblems. – @d6

Some people, when confronted with a problem, think “I know, I’ll use versioning.” Now they have 2.1.0 problems. – @JaesCoyle

Some people, when faced with a problem, think, “I know, I’ll use binary.” Now they have 10 problems. – @nedbat

Introduction

The power of Spark, which operates on in-memory datasets, lies in the fact that it stores the data as collections using Resilient Distributed Datasets (RDDs), which are themselves distributed in partitions across clusters. RDDs are a fast way of processing data, as the data is operated on in parallel based on the map-reduce paradigm. RDDs can be used when the operations are low level. RDDs are typically used on unstructured data like logs or text. For structured and semi-structured data, Spark has a higher abstraction called Dataframes. Handling data through dataframes is extremely fast as they are optimized using the Catalyst optimization engine, and the performance is orders of magnitude faster than RDDs. In addition, Dataframes also use Tungsten, which handles memory management and garbage collection more effectively.
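
As a minimal illustration of the two levels of abstraction (a sketch with a hypothetical CSV path; it assumes an existing SparkContext sc and SparkSession spark as in the sections below):

# RDD level: low-level, line-at-a-time transformations
rdd_result = (sc.textFile("/FileStore/tables/sample.csv")   # hypothetical path
                .map(lambda line: line.split(","))
                .filter(lambda x: x[0] != "DNB"))

# DataFrame level: declarative operations optimized by Catalyst/Tungsten
df_result = (spark.read.option("header", "true")
                  .csv("/FileStore/tables/sample.csv")
                  .filter("Runs != 'DNB'"))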

The picture below shows the performance improvement achieved with Dataframes over RDDs

Benefits from Project Tungsten

Note: The above data and graph are taken from the course Big Data Analysis with Apache Spark at edX, UC Berkeley
This post is a continuation of my 2 earlier posts
1. Big Data-1: Move into the big league:Graduate from Python to Pyspark
2. Big Data-2: Move into the big league:Graduate from R to SparkR

In this post I perform equivalent operations on a small dataset using RDDs, Dataframes in Pyspark & SparkR, and HiveQL. As in some of my earlier posts, I have used the tendulkar.csv file for this post. The dataset is small and allows me to do almost everything, from data cleaning and data transformation to grouping.
You can clone/fork the notebooks from Github at Big Data:Part 3

The notebooks have also been published and can be accessed below

  1. Big Data-1: On RDDs, DataFrames and HiveQL with Pyspark
  2. Big Data-2:On RDDs, Dataframes and HiveQL with SparkR

1. RDD – Select all columns of tables

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
rdd.map(lambda line: (line.split(","))).take(5)
Out[90]: [['Runs', 'Mins', 'BF', '4s', '6s', 'SR', 'Pos', 'Dismissal', 'Inns', 'Opposition', 'Ground', 'Start Date'], ['15', '28', '24', '2', '0', '62.5', '6', 'bowled', '2', 'v Pakistan', 'Karachi', '15-Nov-89'], ['DNB', '-', '-', '-', '-', '-', '-', '-', '4', 'v Pakistan', 'Karachi', '15-Nov-89'], ['59', '254', '172', '4', '0', '34.3', '6', 'lbw', '1', 'v Pakistan', 'Faisalabad', '23-Nov-89'], ['8', '24', '16', '1', '0', '50', '6', 'run out', '3', 'v Pakistan', 'Faisalabad', '23-Nov-89']]

1b. RDD – Select columns 1 to 4

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
rdd.map(lambda line: (line.split(",")[0:4])).take(5)
Out[91]:
[['Runs', 'Mins', 'BF', '4s'],
['15', '28', '24', '2'],
['DNB', '-', '-', '-'],
['59', '254', '172', '4'],
['8', '24', '16', '1']]

1c. RDD – Select specific columns 0, 10

from pyspark import SparkContext 
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df.map(lambda x: (x[10],x[0])).take(5)
Out[92]:
[('Ground', 'Runs'),
('Karachi', '15'),
('Karachi', 'DNB'),
('Faisalabad', '59'),
('Faisalabad', '8')]

2. Dataframe:Pyspark – Select all columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.show(5)
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|Runs|Mins| BF| 4s| 6s|   SR|Pos|Dismissal|Inns|Opposition|    Ground|Start Date|
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|  15|  28| 24|  2|  0| 62.5|  6|   bowled|   2|v Pakistan|   Karachi| 15-Nov-89|
| DNB|   -|  -|  -|  -|    -|  -|        -|   4|v Pakistan|   Karachi| 15-Nov-89|
|  59| 254|172|  4|  0| 34.3|  6|      lbw|   1|v Pakistan|Faisalabad| 23-Nov-89|
|   8|  24| 16|  1|  0|   50|  6|  run out|   3|v Pakistan|Faisalabad| 23-Nov-89|
|  41| 124| 90|  5|  0|45.55|  7|   bowled|   1|v Pakistan|    Lahore|  1-Dec-89|
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
only showing top 5 rows

2a. Dataframe:Pyspark- Select specific columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.select("Runs","BF","Mins").show(5)
+----+---+----+
|Runs| BF|Mins|
+----+---+----+
|  15| 24|  28|
| DNB|  -|   -|
|  59|172| 254|
|   8| 16|  24|
|  41| 90| 124|
+----+---+----+

3. Dataframe:SparkR – Select all columns

# Load the SparkR library
library(SparkR)
# Initiate a SparkR session
sparkR.session()
tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

# Check the dimensions of the dataframe
df=SparkR::select(tendulkar1,"*")
head(SparkR::collect(df))

  Runs Mins  BF 4s 6s    SR Pos Dismissal Inns Opposition     Ground Start Date
1   15   28  24  2  0  62.5   6    bowled    2 v Pakistan    Karachi  15-Nov-89
2  DNB    -   -  -  -     -   -         -    4 v Pakistan    Karachi  15-Nov-89
3   59  254 172  4  0  34.3   6       lbw    1 v Pakistan Faisalabad  23-Nov-89
4    8   24  16  1  0    50   6   run out    3 v Pakistan Faisalabad  23-Nov-89
5   41  124  90  5  0 45.55   7    bowled    1 v Pakistan     Lahore   1-Dec-89
6   35   74  51  5  0 68.62   6       lbw    1 v Pakistan    Sialkot   9-Dec-89

3a. Dataframe:SparkR- Select specific columns

# Load the SparkR library
library(SparkR)
# Initiate a SparkR session
sparkR.session()
tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

# Check the dimensions of the dataframe
df=SparkR::select(tendulkar1, "Runs", "BF","Mins")
head(SparkR::collect(df))
  Runs  BF Mins
1   15  24   28
2  DNB   -    -
3   59 172  254
4    8  16   24
5   41  90  124
6   35  51   74

4. Hive QL – Select all columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  * from tendulkar1_table limit 5').show(10, truncate = False)
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|Runs|Mins|BF |4s |6s |SR   |Pos|Dismissal|Inns|Opposition|Ground    |Start Date|
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|15  |28  |24 |2  |0  |62.5 |6  |bowled   |2   |v Pakistan|Karachi   |15-Nov-89 |
|DNB |-   |-  |-  |-  |-    |-  |-        |4   |v Pakistan|Karachi   |15-Nov-89 |
|59  |254 |172|4  |0  |34.3 |6  |lbw      |1   |v Pakistan|Faisalabad|23-Nov-89 |
|8   |24  |16 |1  |0  |50   |6  |run out  |3   |v Pakistan|Faisalabad|23-Nov-89 |
|41  |124 |90 |5  |0  |45.55|7  |bowled   |1   |v Pakistan|Lahore    |1-Dec-89  |
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+

4a. Hive QL – Select specific columns

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  Runs, BF,Mins from tendulkar1_table limit 5').show(10, truncate = False)
+----+---+----+
|Runs|BF |Mins|
+----+---+----+
|15  |24 |28  |
|DNB |-  |-   |
|59  |172|254 |
|8   |16 |24  |
|41  |90 |124 |
+----+---+----+

5. RDD – Filter rows on specific condition

from pyspark import SparkContext
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=(rdd.map(lambda line: line.split(",")[:])
      .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
      .map(lambda x: [x[0].replace("*","")] + x[1:]))

df.take(5)

Out[97]:
[['Runs', 'Mins', 'BF', '4s', '6s', 'SR', 'Pos', 'Dismissal', 'Inns', 'Opposition', 'Ground', 'Start Date'],
 ['15', '28', '24', '2', '0', '62.5', '6', 'bowled', '2', 'v Pakistan', 'Karachi', '15-Nov-89'],
 ['59', '254', '172', '4', '0', '34.3', '6', 'lbw', '1', 'v Pakistan', 'Faisalabad', '23-Nov-89'],
 ['8', '24', '16', '1', '0', '50', '6', 'run out', '3', 'v Pakistan', 'Faisalabad', '23-Nov-89'],
 ['41', '124', '90', '5', '0', '45.55', '7', 'bowled', '1', 'v Pakistan', 'Lahore', '1-Dec-89']]

5a. Dataframe:Pyspark – Filter rows on specific condition

from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_replace
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'absent')
tendulkar1 = tendulkar1.withColumn('Runs', regexp_replace('Runs', '[*]', ''))
tendulkar1.show(5)
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|Runs|Mins| BF| 4s| 6s|   SR|Pos|Dismissal|Inns|Opposition|    Ground|Start Date|
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
|  15|  28| 24|  2|  0| 62.5|  6|   bowled|   2|v Pakistan|   Karachi| 15-Nov-89|
|  59| 254|172|  4|  0| 34.3|  6|      lbw|   1|v Pakistan|Faisalabad| 23-Nov-89|
|   8|  24| 16|  1|  0|   50|  6|  run out|   3|v Pakistan|Faisalabad| 23-Nov-89|
|  41| 124| 90|  5|  0|45.55|  7|   bowled|   1|v Pakistan|    Lahore|  1-Dec-89|
|  35|  74| 51|  5|  0|68.62|  6|      lbw|   1|v Pakistan|   Sialkot|  9-Dec-89|
+----+----+---+---+---+-----+---+---------+----+----------+----------+----------+
only showing top 5 rows

5b. Dataframe:SparkR – Filter rows on specific condition

sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
df=SparkR::select(tendulkar1,"*")
head(SparkR::collect(df))

5c Hive QL – Filter rows on specific condition

from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1.createOrReplaceTempView('tendulkar1_table')
spark.sql('select  Runs, BF,Mins from tendulkar1_table where Runs NOT IN  ("DNB","TDNB","absent")').show(10, truncate = False)
+----+---+----+
|Runs|BF |Mins|
+----+---+----+
|15  |24 |28  |
|59  |172|254 |
|8   |16 |24  |
|41  |90 |124 |
|35  |51 |74  |
|57  |134|193 |
|0   |1  |1   |
|24  |44 |50  |
|88  |266|324 |
|5   |13 |15  |
+----+---+----+
only showing top 10 rows

6. RDD – Find rows where Runs > 50

from pyspark import SparkContext
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df=rdd.map(lambda line: line.split(",")[0:4]) \
   .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
df1=df.map(lambda x: [x[0].replace("*","")] + x[1:4])
header=df1.first()
df2=df1.filter(lambda x: x !=header)
df3=df2.map(lambda x: [float(x[0])] +x[1:4])
df3.filter(lambda x: x[0]>=50).take(10)
Out[101]: 
[[59.0, '254', '172', '4'],
 [57.0, '193', '134', '6'],
 [88.0, '324', '266', '5'],
 [68.0, '216', '136', '8'],
 [119.0, '225', '189', '17'],
 [148.0, '298', '213', '14'],
 [114.0, '228', '161', '16'],
 [111.0, '373', '270', '19'],
 [73.0, '272', '208', '8'],
 [50.0, '158', '118', '6']]

6a. Dataframe:Pyspark – Find rows where Runs >50

from pyspark.sql import SparkSession

from pyspark.sql.functions import regexp_replace
from pyspark.sql.types import IntegerType
spark = SparkSession.builder.appName('Read CSV DF').getOrCreate()
tendulkar1 = spark.read.format('csv').option('header','true').load('/FileStore/tables/tendulkar.csv')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'absent')
tendulkar1 = tendulkar1.withColumn("Runs", tendulkar1["Runs"].cast(IntegerType()))
tendulkar1.filter(tendulkar1['Runs']>=50).show(10)
+----+----+---+---+---+-----+---+---------+----+--------------+------------+----------+
|Runs|Mins| BF| 4s| 6s|   SR|Pos|Dismissal|Inns|    Opposition|      Ground|Start Date|
+----+----+---+---+---+-----+---+---------+----+--------------+------------+----------+
|  59| 254|172|  4|  0| 34.3|  6|      lbw|   1|    v Pakistan|  Faisalabad| 23-Nov-89|
|  57| 193|134|  6|  0|42.53|  6|   caught|   3|    v Pakistan|     Sialkot|  9-Dec-89|
|  88| 324|266|  5|  0|33.08|  6|   caught|   1| v New Zealand|      Napier|  9-Feb-90|
|  68| 216|136|  8|  0|   50|  6|   caught|   2|     v England|  Manchester|  9-Aug-90|
| 114| 228|161| 16|  0| 70.8|  4|   caught|   2|   v Australia|       Perth|  1-Feb-92|
| 111| 373|270| 19|  0|41.11|  4|   caught|   2|v South Africa|Johannesburg| 26-Nov-92|
|  73| 272|208|  8|  1|35.09|  5|   caught|   2|v South Africa|   Cape Town|  2-Jan-93|
|  50| 158|118|  6|  0|42.37|  4|   caught|   1|     v England|     Kolkata| 29-Jan-93|
| 165| 361|296| 24|  1|55.74|  4|   caught|   1|     v England|     Chennai| 11-Feb-93|
|  78| 285|213| 10|  0|36.61|  4|      lbw|   2|     v England|      Mumbai| 19-Feb-93|
+----+----+---+---+---+-----+---+---------+----+--------------+------------+----------+

6b. Dataframe:SparkR – Find rows where Runs >50

# Load the SparkR library
library(SparkR)
sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
df=SparkR::select(tendulkar1,"*")
df=SparkR::filter(tendulkar1, tendulkar1$Runs > 50)
head(SparkR::collect(df))
  Runs Mins  BF 4s 6s    SR Pos Dismissal Inns    Opposition     Ground
1   59  254 172  4  0  34.3   6       lbw    1    v Pakistan Faisalabad
2   57  193 134  6  0 42.53   6    caught    3    v Pakistan    Sialkot
3   88  324 266  5  0 33.08   6    caught    1 v New Zealand     Napier
4   68  216 136  8  0    50   6    caught    2     v England Manchester
5  119  225 189 17  0 62.96   6   not out    4     v England Manchester
6  148  298 213 14  0 69.48   6   not out    2   v Australia     Sydney
  Start Date
1  23-Nov-89
2   9-Dec-89
3   9-Feb-90
4   9-Aug-90
5   9-Aug-90
6   2-Jan-92

 

7 RDD – groupByKey() and reduceByKey()

from pyspark import SparkContext
from pyspark.mllib.stat import Statistics
rdd = sc.textFile( "/FileStore/tables/tendulkar.csv")
df=rdd.map(lambda line: (line.split(",")))
df=rdd.map(lambda line: line.split(",")[0:]) \
   .filter(lambda x: x[0] not in ["DNB", "TDNB", "absent"])
df1=df.map(lambda x: [x[0].replace("*","")] + x[1:])
header=df1.first()
df2=df1.filter(lambda x: x !=header)
df3=df2.map(lambda x: [float(x[0])] +x[1:])
df4 = df3.map(lambda x: (x[10],x[0]))
df5=df4.reduceByKey(lambda a,b: a+b,1)
df4.groupByKey().mapValues(lambda x: sum(x) / len(x)).take(10)

[('Georgetown', 81.0),
('Lahore', 17.0),
('Adelaide', 32.6),
('Colombo (SSC)', 77.55555555555556),
('Nagpur', 64.66666666666667),
('Auckland', 5.0),
('Bloemfontein', 85.0),
('Centurion', 73.5),
('Faisalabad', 27.0),
('Bridgetown', 26.0)]
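
The same per-ground average can also be computed with reduceByKey() alone, which is generally preferred over groupByKey() since the (sum, count) pairs are combined map-side before the shuffle. A sketch using df4 from above:

# Compute (sum, count) per ground, then divide to get the mean
sums_counts = (df4.mapValues(lambda v: (v, 1))
                  .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])))
means = sums_counts.mapValues(lambda s: s[0] / s[1])
means.take(10)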

7a Dataframe:Pyspark – Compute mean, min and max

from pyspark.sql.functions import *
tendulkar1= (sqlContext
         .read.format("com.databricks.spark.csv")
         .options(delimiter=',', header='true', inferschema='true')
         .load("/FileStore/tables/tendulkar.csv"))
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'DNB')
tendulkar1= tendulkar1.where(tendulkar1['Runs'] != 'TDNB')
tendulkar1 = tendulkar1.withColumn('Runs', regexp_replace('Runs', '[*]', ''))
tendulkar1.select('Runs').rdd.distinct().collect()

from pyspark.sql import functions as F
df=tendulkar1[['Runs','BF','Ground']].groupby(tendulkar1['Ground']).agg(F.mean(tendulkar1['Runs']),F.min(tendulkar1['Runs']),F.max(tendulkar1['Runs']))
df.show()
+-------------+-----------------+---------+---------+
|       Ground|        avg(Runs)|min(Runs)|max(Runs)|
+-------------+-----------------+---------+---------+
|    Bangalore|          54.3125|        0|       96|
|     Adelaide|             32.6|        0|       61|
|Colombo (PSS)|             37.2|       14|       71|
| Christchurch|             12.0|        0|       24|
|     Auckland|              5.0|        5|        5|
|      Chennai|           60.625|        0|       81|
|    Centurion|             73.5|      111|       36|
|     Brisbane|7.666666666666667|        0|        7|
|   Birmingham|            46.75|        1|       40|
|    Ahmedabad|           40.125|      100|        8|
|Colombo (RPS)|            143.0|      143|      143|
|   Chittagong|             57.8|      101|       36|
|    Cape Town|69.85714285714286|       14|        9|
|   Bridgetown|             26.0|        0|       92|
|     Bulawayo|             55.0|       36|       74|
|        Delhi|39.94736842105263|        0|       76|
|   Chandigarh|             11.0|       11|       11|
| Bloemfontein|             85.0|       15|      155|
|Colombo (SSC)|77.55555555555556|      104|        8|
|      Cuttack|              2.0|        2|        2|
+-------------+-----------------+---------+---------+
only showing top 20 rows
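
Note that Runs is still a string in the aggregation above, so min(Runs) and max(Runs) are lexicographic string comparisons (which is why, for example, Centurion shows min 111 and max 36). Casting Runs to an integer first gives numeric aggregates; a sketch:

from pyspark.sql.types import IntegerType
# Cast Runs to integer so mean/min/max are numeric rather than lexicographic
tendulkar2 = tendulkar1.withColumn('Runs', tendulkar1['Runs'].cast(IntegerType()))
df_num = (tendulkar2[['Runs','BF','Ground']]
          .groupby('Ground')
          .agg(F.mean('Runs'), F.min('Runs'), F.max('Runs')))
df_num.show()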

7b Dataframe:SparkR – Compute mean, min and max

sparkR.session()

tendulkar1 <- read.df("/FileStore/tables/tendulkar.csv", 
                header = "true", 
                delimiter = ",", 
                source = "csv", 
                inferSchema = "true", 
                na.strings = "")

print(dim(tendulkar1))
tendulkar1 <-SparkR::filter(tendulkar1,tendulkar1$Runs != "DNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "TDNB")
print(dim(tendulkar1))
tendulkar1<-SparkR::filter(tendulkar1,tendulkar1$Runs != "absent")
print(dim(tendulkar1))

# Cast the string type Runs to double
withColumn(tendulkar1, "Runs", cast(tendulkar1$Runs, "double"))
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
# Remove the "* indicating not out
tendulkar1$Runs=SparkR::regexp_replace(tendulkar1$Runs, "\\*", "")
head(SparkR::distinct(tendulkar1[,"Runs"]),20)
df=SparkR::summarize(SparkR::groupBy(tendulkar1, tendulkar1$Ground), mean = mean(tendulkar1$Runs), minRuns=min(tendulkar1$Runs),maxRuns=max(tendulkar1$Runs))
head(df,20)
          Ground       mean minRuns maxRuns
1      Bangalore  54.312500       0      96
2       Adelaide  32.600000       0      61
3  Colombo (PSS)  37.200000      14      71
4   Christchurch  12.000000       0      24
5       Auckland   5.000000       5       5
6        Chennai  60.625000       0      81
7      Centurion  73.500000     111      36
8       Brisbane   7.666667       0       7
9     Birmingham  46.750000       1      40
10     Ahmedabad  40.125000     100       8
11 Colombo (RPS) 143.000000     143     143
12    Chittagong  57.800000     101      36
13     Cape Town  69.857143      14       9
14    Bridgetown  26.000000       0      92
15      Bulawayo  55.000000      36      74
16         Delhi  39.947368       0      76
17    Chandigarh  11.000000      11      11
18  Bloemfontein  85.000000      15     155
19 Colombo (SSC)  77.555556     104       8
20       Cuttack   2.000000       2       2

Also see
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. My book ‘Deep Learning from first principles: Second Edition’ now on Amazon
3. The Clash of the Titans in Test and ODI cricket
4. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
5. Latency, throughput implications for the Cloud
6. Simulating a Web Joint in Android
7. Pitching yorkpy … short of good length to IPL – Part 1

To see all posts click Index of Posts

Analyzing T20 matches with yorkpy templates

1. Introduction

In this post I create yorkpy templates for end-to-end analysis of any T20 matches that are available on Cricsheet in yaml format. These templates can be used to analyze Intl. T20, IPL, BBL and Natwest T20. In fact they can be used for any T20 games which have been saved in the yaml format specified by Cricsheet.

Note: yorkpy is the clone of my R package yorkr; see yorkr pads up for the Twenty20s: Part 1 – Analyzing team’s match performance

With these templates you can convert all T20 match data which is in yaml format to Pandas dataframes and save them as CSV. Note: the data for Intl T20, IPL, BBL and Natwest T20 have already been converted and are available at allYorkpyData. This template is also available on Github at yorkpyTemplate. The template includes the following steps

  1. Template for conversion and setup
  2. Analysis of Any T20 match
  3. Analysis of a T20 team in all matches against another T20 team
  4. Analysis of a T20 team in all matches against all other teams
  5. Analysis of T20 batsmen and bowlers

You can recreate the files as more matches are added to the Cricsheet site in IPL 2017 and future seasons. This post contains all the steps needed for detailed analysis of IPL matches, teams and IPL players. This will also be my reference if I decide to analyze the IPL in future!

Install yorkpy with pip install yorkpy

Data conversion of the yaml files has to be done before any analysis of T20 batsmen, bowlers, any T20 match, matches between any 2 T20 teams, or a team's performance against all other teams can be done.

The first step is to convert the YAML files that are available for the different T20 leagues, namely Intl. T20, IPL, BBL and Natwest T20, from Cricsheet. For the initial data setup we need to use slightly different functions for each of the T20 leagues since the teams are different. The function to convert yaml to a Pandas dataframe and save as CSV is common for all leagues.
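
For instance, a minimal runnable sketch of this common conversion step, assuming the downloaded YAML files sit in a local folder t20s and the CSVs are written to data1 (both paths are hypothetical):

import yorkpy.analytics as yka
# Convert every Cricsheet YAML file in ./t20s to a pandas dataframe
# and save it as CSV in ./data1 (hypothetical paths - point them at
# your own Cricsheet download)
yka.convertAllYaml2PandasDataframesT20("./t20s", "./data1")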

A. For International T20

import yorkpy.analytics as yka
# Convert yaml to pandas and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\data1")

# Save all matches between any 2 Intl T20 countries
#yka.saveAllMatchesBetween2IntlT20s(dir1)

#Save all matches between an Intl.T20 country and all other countries
#yka.saveAllMatchesAllOppositionIntlT20(dir1)

# Get batting details for a country
#yka.getTeamBattingDetails(<country>,dir=dir1, save=True)

#Get bowling details
#yka.getTeamBowlingDetails(<country>,dir=dir1, save=True)

B. For Indian Premier League (IPL)

import yorkpy.analytics as yka
# Convert yaml to pandas and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\data1")

# Save all matches between any 2 IPL teams
#yka.saveAllMatchesBetween2IPLTeams(dir1)

#Save all matches between an IPL team and all other teams
#yka.saveAllMatchesAllOppositionIPLT20(dir1)

# Get batting details for an IPL team
#yka.getTeamBattingDetails(<team1>,dir=dir1, save=True)

#Get bowling details for an IPL team
#yka.getTeamBowlingDetails(<team1>,dir=dir1, save=True)

C. For Big Bash League (BBL)

import yorkpy.analytics as yka
# Convert yaml to pandas and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\data1")

# Save all matches between any 2 BBL teams
#yka.saveAllMatchesBetween2BBLTeams(dir1)

#Save all matches between a BBL team and all other teams
#yka.saveAllMatchesAllOppositionBBLT20(dir1)

# Get batting details for a BBL team
#yka.getTeamBattingDetails(<team1>,dir=dir1, save=True)

#Get bowling details for a BBL team
#yka.getTeamBowlingDetails(<team1>,dir=dir1, save=True)

D. For Natwest T20

import yorkpy.analytics as yka
# Convert yaml to pandas and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\data1")

# Save all matches between any 2 NWB teams
#yka.saveAllMatchesBetween2NWBTeams(dir1)

#Save all matches between an NWB team and all other teams
#yka.saveAllMatchesAllOppositionNWBT20(dir1)

# Get batting details for an NWB team
#yka.getTeamBattingDetails(<team1>,dir=dir1, save=True)

#Get bowling details for an NWB team
#yka.getTeamBowlingDetails(<team1>,dir=dir1, save=True)

Once the conversion has been done and the data has been set up, we can use any of the yorkpy functions for the 4 leagues (Intl. T20, IPL, BBL or Natwest T20). There are four classes of functions, which can be used for any of these leagues:

  1. Class 1 – Functions that analyze a single T20 match
  2. Class 2 – Functions that analyze the performance of a T20 team in all matches against another T20 team
  3. Class 3 – Functions that analyze the performance of a T20 team against all other teams
  4. Class 4 – Functions that analyze individual T20 batsmen or bowler

2. Class 1 functions

These functions analyze a single T20 match (Intl T20, BBL, IPL or Natwest T20). To see actual usage of Class 1 functions see Pitching yorkpy … short of good length to IPL – Part 1.

import yorkpy.analytics as yka
# Get batting scorecard
#match=pd.read_csv("<match.csv>")
#scorecard,extras=yka.teamBattingScorecardMatch(match,<team1>)

#Get partnership
#match=pd.read_csv("<match.csv>")
#yka.teamBatsmenPartnershipMatch(match,<team1>,<team2>,plot=True/False)

#Batsmen vs bowler
#match=pd.read_csv("<match.csv>")
#yka.teamBatsmenVsBowlersMatch(match,<team1>,<team2>,plot=True/False)

#Bowling scorecard
#match=pd.read_csv("<match.csv>")
#a=yka.teamBowlingScorecardMatch(match,<team1>)

#Wicket Kind
#match=pd.read_csv("<match.csv>")
#yka.teamBowlingWicketKindMatch(match,<team1>,<team2>)

#Wicket Match
#match=pd.read_csv("<match.csv>")
#yka.teamBowlingWicketMatch(match,<team1>,<team2>,plot=True/False)

#Bowler vs Batsman
#match=pd.read_csv("<match.csv>")
#yka.teamBowlersVsBatsmenMatch(match,<team1>,<team2>)

#Match worm chart
#match=pd.read_csv("<match.csv>")
#yka.matchWormChart(match,<team1>,<team2>)
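
As a concrete illustration, here is a minimal sketch that runs two Class 1 functions end to end. The CSV file name is hypothetical; use any match file produced by the conversion step above.

import pandas as pd
import yorkpy.analytics as yka
# Load one converted match (hypothetical file name)
match = pd.read_csv("Chennai Super Kings-Mumbai Indians-2018-04-07.csv")
# Batting scorecard and extras for one team in this match
scorecard, extras = yka.teamBattingScorecardMatch(match, "Chennai Super Kings")
print(scorecard)
# Worm chart comparing the innings of the two teams
yka.matchWormChart(match, "Chennai Super Kings", "Mumbai Indians")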

3. Class 2 functions

These functions analyze the performance of a T20 team (an Intl T20 country, or an IPL, BBL or Natwest T20 team) in all matches against another T20 team. To see usages of Class 2 functions see Pitching yorkpy…on the middle and outside off-stump to IPL – Part 2.

import yorkpy.analytics as yka

# Batting partnerships - Table
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#m=yka.teamBatsmenPartnershiOppnAllMatches(team1_team2_matches,<team1/team2>,report=<"summary"/"detailed">, top=<N>)

# Batting partnerships - Plot
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.teamBatsmenPartnershipOppnAllMatchesChart(team1_team2_matches,<team1>,<team2>,plot=<True/False>, top=<N>, partnershipRuns=<M>)

#Batsmen vs Bowlers
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.teamBatsmenVsBowlersOppnAllMatches(team1_team2_matches,<team1>,<team2>,plot=<True/False>, top=<N>,runsScored=<M>)

# Batting scorecard
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#scorecard=yka.teamBattingScorecardOppnAllMatches(team1_team2_matches,<team1>,<team2>)

#Bowling scorecard
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#scorecard=yka.teamBowlingScorecardOppnAllMatches(team1_team2_matches,<team1>,<team2>)

#Bowling wicket kind
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.teamBowlingWicketKindOppositionAllMatches(team1_team2_matches,<team1>,<team2>,plot=<True/False>,top=<N>,wickets=<M>)

#Bowler vs batsman
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.teamBowlersVsBatsmenOppnAllMatches(team1_team2_matches,<team1>,<team2>,plot=<True/False>,top=<N>,runsConceded=<M>)

# Wins vs losses
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.plotWinLossBetweenTeams(team1_team2_matches,<team1>,<team2>)

#Wins by win type
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.plotWinsByRunOrWickets(team1_team2_matches,<team1>)

#Wins by toss decision
#team1_team2_matches = pd.read_csv("<matches_between_2_teams.csv>")
#yka.plotWinsbyTossDecision(team1_team2_matches,<team1>,tossDecision=<field/bat>)
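
For example, a minimal sketch using one Class 2 function, with a hypothetical file name of the kind produced by saveAllMatchesBetween2IPLTeams():

import pandas as pd
import yorkpy.analytics as yka
# All head-to-head matches between two teams (hypothetical file name)
csk_mi_matches = pd.read_csv("Chennai Super Kings-Mumbai Indians-allMatches.csv")
# Consolidated batting scorecard across all these matches
scorecard = yka.teamBattingScorecardOppnAllMatches(csk_mi_matches, "Chennai Super Kings", "Mumbai Indians")
print(scorecard)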

4. Class 3 functions

This set of functions deals with analyzing the performance of a T20 team (Intl. T20, IPL, BBL or Natwest T20) in all matches against all other teams. To see usages of Class 3 functions see Pitching yorkpy…swinging away from the leg stump to IPL – Part 3. After the matches of a team against all other oppositions have been saved, we can use this data as shown below.

import yorkpy.analytics as yka
#Batsman partnerships
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#m=yka.teamBatsmenPartnershiAllOppnAllMatches(allmatches,<team1>,report=<"summary"/"detailed">, top=<N>,partnershipRuns=<M>)

#Batsmen vs Bowlers
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.teamBatsmenVsBowlersAllOppnAllMatches(allmatches,<team1>,plot=<True/False>,top=<N>,runsScored=<M>)

#Batting scorecard
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#scorecard=yka.teamBattingScorecardAllOppnAllMatches(allmatches,<team1>)

#Bowling scorecard
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#scorecard=yka.teamBowlingScorecardAllOppnAllMatches(allmatches,<team1>)

#Bowling wicket kind
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.teamBowlingWicketKindAllOppnAllMatches(allmatches,<team1>,plot=<True/False>,top=<N>,wickets=<M>)

# Bowler vs Batsmen
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.teamBowlersVsBatsmenAllOppnAllMatches(allmatches,<team1>,plot=<True/False>,top=<N>,runsConceded=<M>)

# Wins vs losses
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.plotWinLossByTeamAllOpposition(allmatches,<team1>,plot=<"summary"/"detailed">)

# Wins by win type
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.plotWinsByRunOrWicketsAllOpposition(allmatches,<team1>)

# Wins by toss decision
#allmatches = pd.read_csv("<allmatchesForTeam.csv>")
#yka.plotWinsbyTossDecisionAllOpposition(allmatches,<team1>,tossDecision=<'bat'/'field'>,plot=<'summary'/'detailed'>)
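
For example, a minimal sketch using one Class 3 function, with a hypothetical file name of the kind produced by saveAllMatchesAllOppositionIPLT20():

import pandas as pd
import yorkpy.analytics as yka
# All matches of one team against every opposition (hypothetical file name)
csk_matches = pd.read_csv("Chennai Super Kings-allMatchesAllOpposition.csv")
# Plot wins and losses against all other teams
yka.plotWinLossByTeamAllOpposition(csk_matches, "Chennai Super Kings", plot="summary")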

5. Class 4 functions

This set of functions is used for analyzing individual batsmen/bowlers. From the converted xxx-BattingDetails.csv and xxx-BowlingDetails.csv files we can get the batsman and bowler details as shown below. Subsequently we can perform analyses of the individual batsmen and bowlers. To see actual usages of Class 4 functions see Pitching yorkpy … in the block hole – Part 4.

import yorkpy.analytics as yka

#Batsman analyses
#Get batsman Dataframe
#batsmanDF=yka.getBatsmanDetails(<team1>,<batsman>,dir=dir1)

#Batsman Runs vs Deliveries
#yka.batsmanRunsVsDeliveries(batsmanDF,<batsmanName>)

#Batsman fours and sixes
#yka.batsmanFoursSixes(batsmanDF,<batsmanName>)


#Batsman dismissals
#yka.batsmanDismissals(batsmanDF,<batsmanName>)

#Batsman Runs vs Strike Rate
#yka.batsmanRunsVsStrikeRate(batsmanDF,<batsmanName>)

#Batsman Moving average
#yka.batsmanMovingAverage(batsmanDF,<batsmanName>)


#Batsman Cumulative average
#yka.batsmanCumulativeAverageRuns(batsmanDF,<batsmanName>)

#Batsman Cumulative Strike rate
#yka.batsmanCumulativeStrikeRate(batsmanDF,<batsmanName>)

#Batsman Runs against opposition
#yka.batsmanRunsAgainstOpposition(batsmanDF,<batsmanName>)

#Batsman Runs at venue
#yka.batsmanRunsVenue(batsmanDF,<batsmanName>)


#Bowler analyses
#Get bowler dataframe
#bowlerDF=yka.getBowlerWicketDetails(<team1>,<bowler>,dir=dir1)

#Mean economy rate
#yka.bowlerMeanEconomyRate(bowlerDF,<bowlerName>)

#Mean Runs conceded
#yka.bowlerMeanRunsConceded(bowlerDF,<bowlerName>)

#Moving average of wickets
#yka.bowlerMovingAverage(bowlerDF,<bowlerName>)

# Cumulative average of wickets
#yka.bowlerCumulativeAvgWickets(bowlerDF,<bowlerName>)

# Cumulative economy rate
#yka.bowlerCumulativeAvgEconRate(bowlerDF,<bowlerName>)

# Wicket plot
#yka.bowlerWicketPlot(bowlerDF,<bowlerName>)

# Wicket against opposition
#yka.bowlerWicketsAgainstOpposition(bowlerDF,<bowlerName>)

# Wickets at venue
#yka.bowlerWicketsVenue(bowlerDF,<bowlerName>)
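
For example, a minimal sketch of a batsman and a bowler analysis with Class 4 functions, assuming dir1 points to the folder holding the saved xxx-BattingDetails.csv and xxx-BowlingDetails.csv files (the player-team pairs are only illustrative):

import yorkpy.analytics as yka
dir1 = "."  # hypothetical path to the saved batting/bowling details
# Batsman: cumulative average runs for MS Dhoni
batsmanDF = yka.getBatsmanDetails("Chennai Super Kings", "MS Dhoni", dir=dir1)
yka.batsmanCumulativeAverageRuns(batsmanDF, "MS Dhoni")
# Bowler: cumulative average wickets for SL Malinga
bowlerDF = yka.getBowlerWicketDetails("Mumbai Indians", "SL Malinga", dir=dir1)
yka.bowlerCumulativeAvgWickets(bowlerDF, "SL Malinga")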

Important note: Do check out my other posts using yorkpy at yorkpy-posts

Conclusion

With the above templates, detailed analysis can be done on

  • A T20 match
  • Performance of a team in all matches against another team
  • Performance of a team in all matches against all other teams
  • Individual batting and bowling performances

See also

  1. Deep Learning from first principles in Python, R and Octave – Part 5
  2. My travels through the realms of Data Science, Machine Learning, Deep Learning and (AI)
  3. Practical Machine Learning with R and Python – Part 4
  4. Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8
  5. A method to crowd source pothole marking on (Indian) roads

To see all posts click Index of posts

yorkpy takes a hat-trick, bowls out Intl. T20s, BBL and Natwest T20!!!

“Dear, dear! How queer everything is to-day! And yesterday things went on just as usual. I wonder if I’ve been changed in the night? Let me think: was I the same when I got up this morning? I almost think I can remember feeling a little different. But if I’m not the same, the next question is ’Who in the world am I? Ah, that’s the great puzzle!”

             Alice's Adventures in Wonderland, Lewis Carroll

1. Introduction

In this post, yorkpy clean bowls the following T20 formats, namely International T20s, the Big Bash League and the Natwest T20 Blast, as I take yorkpy on a spin through these leagues. In the post below, I choose a random set of about 10-12 of the overall 63 functions that yorkpy has, and execute them for each of the different T20 leagues – Intl T20s, BBL and Natwest T20s. yorkpy is the Python avatar of my R package yorkr; see Introducing cricket package yorkr: Part 1- Beaten by sheer pace!

There were a couple of new functions that needed to be added for each of the T20 leagues – Intl T20, BBL and Natwest T20 – to take into account the different teams in each of these leagues. Further, some bugs were also ironed out in the latest version of yorkpy. yorkpy uses data from Cricsheet. The match data is in the form of YAML files, which yorkpy converts to dataframes. The YAML files are very detailed and include a ball-by-ball account of the match.

– You can clone/fork the latest code for yorkpy from github yorkpy
– This post has also been published in RPubs at yorkpy takes a hat-trick
– You can download the PDF version of this post at yorkpy takes a hat-trick

The data for IPL, Intl. T20, BBL and Natwest T20 have already been converted into pandas dataframes and saved as CSVs. You can download the converted files from Github at [allYorkpyT20Data](https://github.com/tvganesh/allYorkpyT20Data)

yorkpy has the following 4 main classes of functions

A. Functions analyzing an individual T20 match (Class 1)

This was demonstrated in Pitching yorkpy … short of good length to IPL – Part 1. These functions deal with individual T20 matches. The functions are

  1. convertYaml2PandasDataframeT20()
  2. convertAllYaml2PandasDataframesT20()
  3. teamBattingScorecardMatch()
  4. teamBatsmenPartnershipMatch()
  5. teamBatsmenVsBowlersMatch()
  6. teamBowlingScorecardMatch()
  7. teamBowlingWicketKindMatch()
  8. teamBowlingWicketRunsMatch()
  9. teamBowlingWicketMatch()
  10. teamBowlersVsBatsmenMatch()
  11. matchWormChart()

B. Functions that analyze all matches between 2 T20 teams (Class 2)

Pitching yorkpy…on the middle and outside off-stump to IPL – Part 2 included functions that analyze the head-to-head confrontation between any 2 T20 teams. The functions are

  1. getAllMatchesBetweenTeams()
  2. saveAllMatchesBetween2IPLTeams()
  3. teamBatsmenPartnershiOppnAllMatches()
  4. teamBatsmenPartnershipOppnAllMatchesChart()
  5. teamBatsmenVsBowlersOppnAllMatches()
  6. teamBattingScorecardOppnAllMatches()
  7. teamBowlingScorecardOppnAllMatches()
  8. teamBowlingWicketKindOppositionAllMatches()
  9. teamBowlersVsBatsmenOppnAllMatches()
  10. plotWinLossBetweenTeams()
  11. plotWinsByRunOrWickets()
  12. plotWinsbyTossDecision()

C. Functions that analyze the performance of a T20 team against all other teams (Class 3)

The post Pitching yorkpy…swinging away from the leg stump to IPL – Part 3 is based on the Class 3 set of functions shown below

  1. getAllMatchesAllOpposition()
  2. saveAllMatchesAllOppositionIPLT20()
  3. teamBatsmenPartnershiAllOppnAllMatches()
  4. teamBatsmenPartnershipAllOppnAllMatchesChart()
  5. teamBatsmenVsBowlersAllOppnAllMatches()
  6. teamBattingScorecardAllOppnAllMatches()
  7. teamBowlingScorecardAllOppnAllMatches()
  8. teamBowlingWicketKindAllOppnAllMatches()
  9. teamBowlersVsBatsmenAllOppnAllMatches()
  10. plotWinLossByTeamAllOpposition()
  11. plotWinsByRunOrWicketsAllOpposition()
  12. plotWinsbyTossDecisionAllOpposition()

D. Functions that analyze performances of T20 batsmen and bowlers (Class 4)

This set of functions analyzes individual batsmen and bowlers, and has been used in Pitching yorkpy … in the block hole – Part 4. The functions are

  1. getTeamBattingDetails()
  2. getBatsmanDetails()
  3. batsmanRunsVsDeliveries()
  4. batsmanFoursSixes()
  5. batsmanDismissals()
  6. batsmanRunsVsStrikeRate()
  7. batsmanMovingAverage()
  8. batsmanCumulativeAverageRuns()
  9. batsmanCumulativeStrikeRate()
  10. batsmanRunsAgainstOpposition()
  11. batsmanRunsVenue()
  12. getTeamBowlingDetails()
  13. getBowlerWicketDetails()
  14. bowlerMeanEconomyRate()
  15. bowlerMeanRunsConceded()
  16. bowlerMovingAverage()
  17. bowlerCumulativeAvgWickets()
  18. bowlerCumulativeAvgEconRate()
  19. bowlerWicketPlot()
  20. bowlerWicketsAgainstOpposition()
  21. bowlerWicketsVenue()

Additional new functions were added to handle Intl T20s, Big Bash League and Natwest T20 Blast, since the teams are different. They are

59. saveAllMatchesBetween2IntlT20s()
60. saveAllMatchesAllOppositionIntlT20()
61. saveAllMatchesBetween2BBLTeams()
62. saveAllMatchesAllOppositionBBLT20()
63. saveAllMatchesBetween2NWBTeams()
64. saveAllMatchesAllOppositionNWBT20()
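
For instance, a minimal sketch that prepares the BBL data with two of these new functions (the folder name is hypothetical and should point to the converted BBL match CSVs):

import yorkpy.analytics as yka
dir1 = "BBLT20-Matches"  # hypothetical folder of converted BBL match CSVs
# Save all head-to-head matches between every pair of BBL teams
yka.saveAllMatchesBetween2BBLTeams(dir1)
# Save all matches of each BBL team against all other teams
yka.saveAllMatchesAllOppositionBBLT20(dir1)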

All other functions can be used as is! You can get the help of any function in yorkpy using

import yorkpy.analytics as yka
help(yka.teamBatsmenPartnershiOppnAllMatches)
## Help on function teamBatsmenPartnershiOppnAllMatches in module yorkpy.analytics:
## 
## teamBatsmenPartnershiOppnAllMatches(matches, theTeam, report='summary', top=5)
##     Team batting partnership against a opposition all IPL matches
##     
##     Description
##     
##     This function computes the performance of batsmen against all bowlers of an oppositions in 
##     all matches. This function returns a dataframe
##     
##     Usage
##     
##     teamBatsmenPartnershiOppnAllMatches(matches,theTeam,report="summary")
##     Arguments
##     
##     matches     
##     All the matches of the team against the oppositions
##     theTeam     
##     The team for which the the batting partnerships are sought
##     report      
##     If the report="summary" then the list of top batsmen with the highest partnerships 
##     is displayed. If report="detailed" then the detailed break up of partnership is returned 
##     as a dataframe
##     top
##     The number of players to be displayed from the top
##     Value
##     
##     partnerships The data frame of the partnerships
##     
##     Note
##     
##     Maintainer: Tinniam V Ganesh tvganesh.85@gmail.com
##     
##     Author(s)
##     
##     Tinniam V Ganesh
##     
##     References
##     
##     http://cricsheet.org/
##     https://gigadom.wordpress.com/
##     
##     
##     See Also
##     
##     teamBatsmenVsBowlersOppnAllMatchesPlot
##     teamBatsmenPartnershipOppnAllMatchesChart

As I mentioned above, I will be randomly choosing a set of about 12 functions from Classes 1-4 for each of the T20 leagues (Intl T20, BBL and NWB T20) for the analysis.

2. International T20s

The following functions were added for handling Intl. T20s

  1. saveAllMatchesBetween2IntlT20s()
  2. saveAllMatchesAllOppositionIntlT20()

The countries currently handled for Intl. T20s are:

Afghanistan, Australia, Bangladesh, Bermuda, Canada, England, Hong Kong, India, Ireland, Kenya, Nepal, Netherlands, New Zealand, Oman, Pakistan, Scotland, South Africa, Sri Lanka, United Arab Emirates, West Indies, Zimbabwe

import os
#os.chdir('C:\\software\\cricket-package\\yorkpyT20\\t20s')
#import yorkpy.analytics as yka
#1.  Convert all YAML files to dataframes and CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\data1")
#dir1='C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches'
#2. Save all matches between 2 T20 teams
#yka.saveAllMatchesBetween2IntlT20s(dir1)
#3. Save all matches between a T20 team and all other teams
#dir1='C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches'
#yka.saveAllMatchesAllOppositionIntlT20(dir1)
#4. Get batting details
#dir1='C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches'
#yka.getTeamBattingDetails("Afghanistan",dir=dir1, save=True)
#yka.getTeamBattingDetails("Australia",dir=dir1,save=True)
#yka.getTeamBattingDetails("Bangladesh",dir=dir1,save=True)
#...
#5. Get bowling details
#dir1='C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches'
#yka.getTeamBowlingDetails("Afghanistan",dir=dir1, save=True)
#yka.getTeamBowlingDetails("Australia",dir=dir1,save=True)
#yka.getTeamBowlingDetails("Bangladesh",dir=dir1,save=True)
# ...

Once the data is converted you can use the yorkpy functions. The converted data for Intl T20 is available at Github at IntlT20.

To use the yorkpy functions for a new league, we first need to convert the YAML files into the appropriate format for processing by yorkpy.

This creates the necessary files which are used in the functions below.

2.1 Intl. T20 – Team score card (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches"
path=os.path.join(dir1,".\\India-New Zealand-2007-09-16.csv")
ind_nz=pd.read_csv(path)
scorecard,extras=yka.teamBattingScorecardMatch(ind_nz,"India")
print(scorecard)
##             batsman  runs  balls  4s  6s          SR
## 0         G Gambhir    51     34   5   2  150.000000
## 1          V Sehwag    40     18   6   2  222.222222
## 2        RV Uthappa     0      2   0   0    0.000000
## 3          MS Dhoni    24     20   2   0  120.000000
## 4      Yuvraj Singh     5      7   0   0   71.428571
## 5        KD Karthik    17     12   3   0  141.666667
## 6         IK Pathan    11     10   2   0  110.000000
## 7        AB Agarkar     1      2   0   0   50.000000
## 8   Harbhajan Singh     7      6   1   0  116.666667
## 9       S Sreesanth    19     10   4   0  190.000000
## 10         RP Singh     1      1   0   0  100.000000
print(extras)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    370      6        0        8     0        0      14

2.2 Intl. T20 -Team batsmen partnership (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches"
path=os.path.join(dir1,".\\South Africa-Australia-2009-03-27.csv")
sa_aus=pd.read_csv(path)
yka.teamBatsmenPartnershipMatch(sa_aus,'South Africa','Australia',plot=True)

2.3 Intl. T20 -Team bowling scorecard match (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches"
path=os.path.join(dir1,".\\Sri Lanka-West Indies-2012-09-28.csv")
sl_wi=pd.read_csv(path)
a=yka.teamBowlingScorecardMatch(sl_wi,'Sri Lanka')
print(a)
##          bowler  overs  runs  maidens  wicket  econrate
## 0    A Mohammed      2    13        0       0       6.5
## 1  SA Campbelle      1     8        0       1       8.0
## 2     SC Selman      1     3        0       0       3.0
## 3      SF Daley      2     5        0       1       2.5
## 4     SR Taylor      2     4        0       1       2.0
## 5     TD Smartt      2    17        0       0       8.5

2.4 Intl. T20 -Match Worm chart (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-Matches"
path=os.path.join(dir1,".\\England-India-2012-09-29.csv")
eng_ind=pd.read_csv(path)
yka.matchWormChart(eng_ind,"England", "India")

path=os.path.join(dir1,".\\Bangladesh-Ireland-2015-12-05.csv")
ban_ire=pd.read_csv(path)
yka.matchWormChart(ban_ire,"Bangladesh", "Ireland")

2.5 Intl. T20 -Team Batting partnerships all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"India-England-allMatches.csv")
ind_eng_matches = pd.read_csv(path)
theTeam='India'
m=yka.teamBatsmenPartnershiOppnAllMatches(ind_eng_matches,theTeam,report="detailed", top=4)
print(m)
##      batsman  totalPartnershipRuns    non_striker  partnershipRuns
## 0   SK Raina                   265      G Gambhir                2
## 1   SK Raina                   265       KL Rahul               40
## 2   SK Raina                   265      MK Tiwary               24
## 3   SK Raina                   265       MS Dhoni              124
## 4   SK Raina                   265        P Kumar                0
## 5   SK Raina                   265      PP Chawla                4
## 6   SK Raina                   265       R Ashwin                1
## 7   SK Raina                   265      RG Sharma               16
## 8   SK Raina                   265        V Kohli               47
## 9   SK Raina                   265   Yuvraj Singh                7
## 10  MS Dhoni                   264       A Mishra                1
## 11  MS Dhoni                   264      AT Rayudu               18
## 12  MS Dhoni                   264      HH Pandya                8
## 13  MS Dhoni                   264      IK Pathan                2
## 14  MS Dhoni                   264      JJ Bumrah                2
## 15  MS Dhoni                   264      MK Pandey                3
## 16  MS Dhoni                   264  Parvez Rasool               21
## 17  MS Dhoni                   264       R Ashwin               11
## 18  MS Dhoni                   264      RA Jadeja               11
## 19  MS Dhoni                   264      RG Sharma                9
## 20  MS Dhoni                   264        RR Pant                6
## 21  MS Dhoni                   264     RV Uthappa                5
## 22  MS Dhoni                   264       SK Raina               98
## 23  MS Dhoni                   264      YK Pathan               36
## 24  MS Dhoni                   264   Yuvraj Singh               33
## 25   V Kohli                   236      AM Rahane                3
## 26   V Kohli                   236      G Gambhir               78
## 27   V Kohli                   236       KL Rahul               46
## 28   V Kohli                   236      RG Sharma                2
## 29   V Kohli                   236     RV Uthappa                4
## 30   V Kohli                   236       S Dhawan               45
## 31   V Kohli                   236       SK Raina               48
## 32   V Kohli                   236   Yuvraj Singh               10
## 33     M Raj                   176       A Sharma                2
## 34     M Raj                   176         H Kaur               18
## 35     M Raj                   176      J Goswami                6
## 36     M Raj                   176        KV Jain                5
## 37     M Raj                   176       L Kumari                5
## 38     M Raj                   176    N Niranjana                3
## 39     M Raj                   176       N Tanwar               17
## 40     M Raj                   176        PG Raut               41
## 41     M Raj                   176     R Malhotra                5
## 42     M Raj                   176     S Mandhana                8
## 43     M Raj                   176         S Naik               10
## 44     M Raj                   176       S Pandey               19
## 45     M Raj                   176       SK Naidu               37

2.6 Intl. T20 -Team Batsmen vs Bowlers all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Ireland-Netherlands-allMatches.csv")
ire_nl_matches = pd.read_csv(path)
yka.teamBatsmenVsBowlersOppnAllMatches(ire_nl_matches,'Ireland',"Netherlands",plot=True,top=3,runsScored=10)

2.7 Intl. T20 -Team Bowling scorecard all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\IntlT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Bangladesh-Nepal-allMatches.csv")
bang_nep_matches = pd.read_csv(path)
scorecard=yka.teamBowlingScorecardOppnAllMatches(bang_nep_matches,'Bangladesh',"Nepal")
print(scorecard)
##         bowler  overs  runs  maidens  wicket   econrate
## 0      B Regmi      3    14        0       1   4.666667
## 3   SP Gauchan      4    40        0       1  10.000000
## 1   JK Mukhiya      2    16        0       0   8.000000
## 2     P Khadka      3    23        0       0   7.666667
## 4    Sagar Pun      1    16        0       0  16.000000
## 5  Sompal Kami      2    21        0       0  10.500000

2.8 Intl. T20 -Team Batsmen vs Bowlers all Oppositions (Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\\IntlT20-allMatchesAllOpposition\\"
path=os.path.join(dir1,"Australia-allMatchesAllOpposition.csv")
aus_matches = pd.read_csv(path)
yka.teamBatsmenVsBowlersAllOppnAllMatches(aus_matches,"Australia",plot=True,top=3,runsScored=40)

2.9 Intl. T20 -Wins vs Losses of a team against all other teams (Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\\IntlT20-allMatchesAllOpposition\\"
path=os.path.join(dir1,"South Africa-allMatchesAllOpposition.csv")
sa_matches = pd.read_csv(path)
team1='South Africa'
yka.plotWinLossByTeamAllOpposition(sa_matches,team1,plot="detailed")

2.10 Intl. T20 -Batsmen analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\\IntlT20-BattingBowlingDetails\\"
# Rohit Sharma
name="RG Sharma"
team='India'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeAverageRuns(df,name)

# MJ Guptill
name="MJ Guptill"
team='New Zealand'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeStrikeRate(df,name)

2.11 Intl. T20 -Bowler analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyT20\\\IntlT20-BattingBowlingDetails\\"
# Shakib Al Hasan
name="Shakib Al Hasan"
team='Bangladesh'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanEconomyRate(df,name)

# Lasith Malinga
name="SL Malinga"
team='Sri Lanka'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsAgainstOpposition(df,name)

3. Big Bash League

The following functions were added to handle BBL teams

  1. saveAllMatchesBetween2BBLTeams()
  2. saveAllMatchesAllOppositionBBLT20()

The BBL teams included are Adelaide Strikers, Brisbane Heat, Hobart Hurricanes, Melbourne Renegades, Melbourne Stars, Perth Scorchers, Sydney Sixers and Sydney Thunder.

To use the yorkpy functions, the YAML files first have to be converted into pandas dataframes and then saved as CSV as shown below.

import os
import yorkpy.analytics as yka
os.chdir('C:\\software\\cricket-package\\yorkpyBBL\\bbl')
#1. Convert all YAML files to dataframes and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\BBLT20-Matches")
#2. Save all matches between 2 BBL teams
dir1='C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches'
#yka.saveAllMatchesBetween2BBLTeams(dir1)
#3. Save T20 matches between a BBL team and all other teams
dir1='C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches'
#yka.saveAllMatchesAllOppositionBBLT20(dir1)
#4. Get the batting details
dir1='C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches'
#yka.getTeamBattingDetails("Adelaide Strikers",dir=dir1, save=True)
#yka.getTeamBattingDetails("Brisbane Heat",dir=dir1,save=True)
#yka.getTeamBattingDetails("Hobart Hurricanes",dir=dir1,save=True)
#...
# Get the bowling details
dir1='C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches'
#yka.getTeamBowlingDetails("Adelaide Strikers",dir=dir1, save=True)
#yka.getTeamBowlingDetails("Brisbane Heat",dir=dir1,save=True)
#yka.getTeamBowlingDetails("Hobart Hurricanes",dir=dir1,save=True)
#...

The functions below perform analysis on the generated files from above. The YAML files have already been converted and are available at Github at BBL

3.1 Big Bash League – Team score card (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches"
path=os.path.join(dir1,".\\Adelaide Strikers-Brisbane Heat-2012-12-13.csv")
as_bh=pd.read_csv(path)
scorecard,extras=yka.teamBattingScorecardMatch(as_bh,"Brisbane Heat")
print(scorecard)
##          batsman  runs  balls  4s  6s          SR
## 0  LA Pomersbach    65     42   8   2  154.761905
## 1       JR Hopes     1      2   0   0   50.000000
## 2       JA Burns    37     31   2   2  119.354839
## 3   DT Christian    12     15   0   0   80.000000
## 4    NLTC Perera    12      4   0   2  300.000000
## 5        CA Lynn    19     18   1   1  105.555556
## 6    BCJ Cutting    13      5   0   2  260.000000
## 7     PJ Forrest    12      8   0   1  150.000000
## 8     CD Hartley     5      2   1   0  250.000000
print(extras)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    371     10        2        5     0        0      17

3.2 Big Bash League -Team batsmen vs Bowlers (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches"
path=os.path.join(dir1,".\\Hobart Hurricanes-Melbourne Renegades-2012-01-18.csv")
hh_mr=pd.read_csv(path)
yka.teamBatsmenVsBowlersMatch(hh_mr,'Hobart Hurricanes','Melbourne Renegades',plot=True)

3.3 Big Bash League -Team bowling scorecard match (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches"
path=os.path.join(dir1,".\\Melbourne Stars-Sydney Thunder-2016-01-24.csv")
ms_st=pd.read_csv(path)
a=yka.teamBowlingScorecardMatch(ms_st,'Sydney Thunder')
print(a)
##           bowler  overs  runs  maidens  wicket   econrate
## 0        A Zampa      4    32        0       2   8.000000
## 1  BW Hilfenhaus      2    21        0       0  10.500000
## 2      DJ Hussey      1     9        0       1   9.000000
## 3     DJ Worrall      3    42        0       0  14.000000
## 4      EP Gulbis      2    19        0       0   9.500000
## 5        MA Beer      3    25        0       1   8.333333
## 6     MP Stoinis      4    30        0       3   7.500000

3.4 Big Bash League – Match Worm chart (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-Matches"
path=os.path.join(dir1,".\\Sydney Sixers-Melbourne Stars-2011-12-27.csv")
ss_ms=pd.read_csv(path)
yka.matchWormChart(ss_ms,"Melbourne Stars", "Sydney Sixers")

path=os.path.join(dir1,".\\Hobart Hurricanes-Brisbane Heat-2015-01-02.csv")
hh_bh=pd.read_csv(path)
yka.matchWormChart(hh_bh,"Hobart Hurricanes", "Brisbane Heat")

3.5 Big Bash League -Team Batting partnerships all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Brisbane Heat-Adelaide Strikers-allMatches.csv")
bh_as_matches = pd.read_csv(path)
yka.teamBatsmenPartnershipOppnAllMatchesChart(bh_as_matches,"Brisbane Heat","Adelaide Strikers",plot=True, top=4, partnershipRuns=20)

3.6 Big Bash League -Team Bowling wicket kind all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Sydney Sixers-Perth Scorchers-allMatches.csv")
ss_ps_matches = pd.read_csv(path)
yka.teamBowlingWicketKindOppositionAllMatches(ss_ps_matches,'Perth Scorchers','Sydney Sixers',plot=True,top=5,wickets=1)

3.7 Big Bash League -Team Bowling scorecard all teams (Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-allMatchesAllOpposition"
path=os.path.join(dir1,"Hobart Hurricanes-allMatchesAllOpposition.csv")
hh_matches = pd.read_csv(path)
scorecard=yka.teamBowlingScorecardAllOppnAllMatches(hh_matches,"Hobart Hurricanes")
print(scorecard)
##              bowler  overs  runs  maidens  wicket   econrate
## 16            B Lee     20   132        0       9   6.600000
## 30         CJ McKay     13   110        0       9   8.461538
## 88    NJ Rimmington     16   103        1       9   6.437500
## 67      JW Hastings     15    88        0       8   5.866667
## 63      JP Faulkner     15   146        0       7   9.733333
## 27        CJ Gannon     17   147        1       7   8.647059
## 93          NM Lyon      8    51        0       7   6.375000
## 20      BCJ Cutting     27   226        0       7   8.370370
## 48          GB Hogg     22   167        0       7   7.590909
## 107       SM Boland     12    96        0       7   8.000000
## 15       B Laughlin     13    99        0       7   7.615385
## 87      MT Steketee     15   134        0       5   8.933333
## 121    Yasir Arafat      9    48        0       4   5.333333
## 96       PJ Cummins      8    83        0       4  10.375000
## 46      Fawad Ahmed     11    64        0       4   5.818182
## 76          MA Beer     12    63        0       4   5.250000
## 108     SNJ O'Keefe     15   104        0       4   6.933333
## 75   M Muralitharan      7    31        0       4   4.428571
## 10           AJ Tye     16   127        0       4   7.937500
## 52          J Botha     13    94        0       4   7.230769
## 56     JL Pattinson      7    71        0       4  10.142857
## 62   JP Behrendorff     16   119        0       4   7.437500
## 3           AC Agar     12    87        0       4   7.250000
## 24     BM Edmondson      4    40        0       4  10.000000
## 37        DJ Hussey      8    47        0       3   5.875000
## 49       GJ Maxwell      8    65        0       3   8.125000
## 84       MN Samuels      4    22        0       3   5.500000
## 81         MG Neser      5    54        0       3  10.800000
## 44     DT Christian      9   114        0       3  12.666667
## 50        GS Sandhu      7    51        0       3   7.285714
## ..              ...    ...   ...      ...     ...        ...
## 43        DP Nannes      8    58        0       1   7.250000
## 51         IA Moran      4    25        0       1   6.250000
## 55         JK Lalor     10    82        0       1   8.200000
## 54        JH Kallis      3    18        0       1   6.000000
## 73   LR Butterworth      4    25        0       1   6.250000
## 4      AC McDermott      2    28        0       1  14.000000
## 70         LA Doran      4    38        0       1   9.500000
## 69    KW Richardson      6    44        0       1   7.333333
## 119     WD Sheridan      2     6        0       0   3.000000
## 2       AB McDonald      1    15        0       0  15.000000
## 115      TD Andrews      3    23        0       0   7.666667
## 11          AK Heal      4    33        0       0   8.250000
## 7        AD Russell      4    40        0       0  10.000000
## 8          AJ Finch      2    15        0       0   7.500000
## 9         AJ Turner      3    28        0       0   9.333333
## 60        JM Mennie      1    20        0       0  20.000000
## 18        BA Stokes      1     9        0       0   9.000000
## 26         CH Gayle      1    16        0       0  16.000000
## 28         CJ Green      4    44        0       0  11.000000
## 95   PD Collingwood      2    20        0       0  10.000000
## 31       CJ Simmons      4    21        0       0   5.250000
## 59       JM Holland      3    34        0       0  11.333333
## 36         DJ Bravo      6    64        0       0  10.666667
## 38     DJ Pattinson      2    16        0       0   8.000000
## 41       DJ Worrall      8    90        0       0  11.250000
## 72      LN O'Connor      6    56        0       0   9.333333
## 71        LJ Wright      3    27        0       0   9.000000
## 68       KA Pollard      1     7        0       0   7.000000
## 58       JM Herrick      4    23        0       0   5.750000
## 92       NM Hauritz      5    42        0       0   8.400000
## 
## [122 rows x 6 columns]

3.8 Big Bash League -Plot wins vs losses against all teams(Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-allMatchesAllOpposition"
path=os.path.join(dir1,"Sydney Sixers-allMatchesAllOpposition.csv")
ss_matches = pd.read_csv(path)
yka.plotWinLossByTeamAllOpposition(ss_matches,'Sydney Sixers')

3.9 Big Bash League -Wins vs losses by toss decision (Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-allMatchesAllOpposition"
path=os.path.join(dir1,"Adelaide Strikers-allMatchesAllOpposition.csv")
as_matches = pd.read_csv(path)
yka.plotWinsByRunOrWicketsAllOpposition(as_matches,'Adelaide Strikers')

3.10 Big Bash League -Batsmen Analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-BattingBowlingDetails"
# CA Lynn
name="CA Lynn"
team='Brisbane Heat'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsStrikeRate(df,name)

# UT Khawaja
name="UT Khawaja"
team='Sydney Thunder'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsAgainstOpposition(df,name)

3.11 Big Bash League – Bowler analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyBBL\\BBLT20-BattingBowlingDetails"
# CJ McKay
name="CJ McKay"
team='Sydney Thunder'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgWickets(df,name)

# AU Rashid
name="AU Rashid"
team='Adelaide Strikers'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgEconRate(df,name)

4. Natwest T20 Blast

The following functions were added to handle Natwest T20 teams

  1. saveAllMatchesBetween2NWBTeams()
  2. saveAllMatchesAllOppositionNWBT20()

The Natwest teams are
Derbyshire, Durham, Essex, Glamorgan, Gloucestershire, Hampshire, Kent, Lancashire, Leicestershire, Middlesex, Northamptonshire, Nottinghamshire, Somerset, Surrey, Sussex, Warwickshire, Worcestershire and Yorkshire.

In order to perform analysis with yorkpy, the YAML data has to be converted to pandas dataframes and saved as CSV as shown below.

import os
import yorkpy.analytics as yka
#os.chdir('C:\\software\\cricket-package\\yorkpyNWB\\nwb')
#1. Convert YAML to dataframes and save as CSV
#yka.convertAllYaml2PandasDataframesT20(".", "..\\NWBT20-Matches")
#2. Save all matches between 2 NWBT20 teams
#dir1='C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-Matches'
#yka.saveAllMatchesBetween2NWBTeams(dir1)
#3. Save all matches between a NWB T20 team and all other teams
#dir1='C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-Matches'
#yka.saveAllMatchesAllOppositionNWBT20(dir1)
#4. Compute the batting details
dir1='C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-Matches'
#yka.getTeamBattingDetails("Derbyshire",dir=dir1, save=True)
#yka.getTeamBattingDetails("Durham",dir=dir1,save=True)
#yka.getTeamBattingDetails("Essex",dir=dir1,save=True)
#..
#5. Compute bowling details
dir1='C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-Matches'
#yka.getTeamBowlingDetails("Derbyshire",dir=dir1, save=True)
#yka.getTeamBowlingDetails("Durham",dir=dir1,save=True)
#yka.getTeamBowlingDetails("Essex",dir=dir1,save=True)
#...

Once the data is converted all yorkpy functions can be used. This has already been done and is available at github NWB

4.1 Natwest T20 Blast – Team score card (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\\yorkpyNWB\\NWBT20-Matches"
path=os.path.join(dir1,".\\Durham-Yorkshire-2016-08-20.csv")
d_y=pd.read_csv(path)
scorecard,extras=yka.teamBattingScorecardMatch(d_y,"Durham")
print(scorecard)
##           batsman  runs  balls  4s  6s          SR
## 0     MD Stoneman    25     20   4   0  125.000000
## 1     KK Jennings    11     13   1   0   84.615385
## 2       BA Stokes    56     37   4   3  151.351351
## 3   MJ Richardson    29     23   4   1  126.086957
## 4     JTA Burnham    17     15   1   1  113.333333
## 5      RD Pringle    10      9   1   0  111.111111
## 6  PD Collingwood     2      3   0   0   66.666667
## 7        U Arshad     1      1   0   0  100.000000
print(extras)
##    total  wides  noballs  legbyes  byes  penalty  extras
## 0    305      2        0        5     0        0       7

4.2 Natwest T20 Blast -Team batsmen vs Bowlers (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\\yorkpyNWB\\NWBT20-Matches"
path=os.path.join(dir1,".\\Derbyshire-Lancashire-2016-07-13.csv")
d_l=pd.read_csv(path)
yka.teamBatsmenVsBowlersMatch(d_l,'Lancashire','Derbyshire',plot=True)

4.3 Natwest T20 Blast -Team bowling scorecard match (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\\yorkpyNWB\\NWBT20-Matches"
path=os.path.join(dir1,".\\Essex-Surrey-2016-05-20.csv")
e_s=pd.read_csv(path)
a=yka.teamBowlingScorecardMatch(e_s,'Essex')
print(a)
##           bowler  overs  runs  maidens  wicket   econrate
## 0  Azhar Mahmood      3    38        0       4  12.666667
## 1       GJ Batty      4    33        0       1   8.250000
## 2       JE Burke      1    18        0       0  18.000000
## 3     MW Pillans      3    28        0       0   9.333333
## 4      SM Curran      4    23        0       2   5.750000
## 5      TK Curran      4    21        0       3   5.250000

4.4 Natwest T20 Blast -Match Worm chart (Class 1)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\\yorkpyNWB\\NWBT20-Matches"
path=os.path.join(dir1,".\\Gloucestershire-Glamorgan-2016-06-10.csv")
g_g=pd.read_csv(path)
yka.matchWormChart(g_g,"Gloucestershire", "Glamorgan")

path=os.path.join(dir1,".\\Leicestershire-Northamptonshire-2016-05-20.csv")
l_n=pd.read_csv(path)
yka.matchWormChart(l_n,"Northamptonshire", "Leicestershire")

4.5 Natwest T20 Blast -Team Batting partnerships all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Hampshire-Sussex-allMatches.csv")
h_s_matches = pd.read_csv(path)
yka.teamBatsmenPartnershipOppnAllMatchesChart(h_s_matches,"Hampshire","Sussex",plot=True, top=4, partnershipRuns=10)

4.6 Natwest T20 Blast -Team Bowlers vs Batsmen all matches 2 teams (Class 2)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-allMatchesBetween2Teams"
path=os.path.join(dir1,"Kent-Somerset-allMatches.csv")
k_s_matches = pd.read_csv(path)
yka.teamBowlersVsBatsmenOppnAllMatches(k_s_matches,'Kent','Somerset',plot=True,top=5,runsConceded=10)

4.7 Natwest T20 Blast -Team Bowling scorecard all teams (Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-allMatchesAllOpposition"
path=os.path.join(dir1,"Middlesex-allMatchesAllOpposition.csv")
m_matches = pd.read_csv(path)
scorecard=yka.teamBowlingScorecardAllOppnAllMatches(m_matches,"Middlesex")
print(scorecard)
##               bowler  overs  runs  maidens  wicket   econrate
## 1             AJ Tye      8    75        0       6   9.375000
## 5         BAC Howell      8    41        0       5   5.125000
## 26         GR Napier      7    65        0       5   9.285714
## 15        DI Stevens      4    31        0       4   7.750000
## 19       DW Lawrence      6    37        0       4   6.166667
## 32       JW Dernbach      4    33        0       3   8.250000
## 7          BTJ Wheal      4    43        0       3  10.750000
## 18         DR Briggs      4    24        0       3   6.000000
## 50     RK Kleinveldt      4    24        0       3   6.000000
## 46         R McLaren      7    59        0       3   8.428571
## 47         R Rampaul      3    21        0       3   7.000000
## 34         L Gregory      6    51        0       2   8.500000
## 33   KMDN Kulasekara      2    24        0       2  12.000000
## 40          MG Hogan      3    17        0       2   5.666667
## 43        MTC Waller      4    31        0       2   7.750000
## 49        RJ Gleeson      4    20        0       2   5.000000
## 48  RE van der Merwe      5    24        0       2   4.800000
## 51  RN ten Doeschate      4    32        0       2   8.000000
## 53        S Prasanna      4    20        0       2   5.000000
## 56           SW Tait      3    17        0       2   5.666667
## 57     Shahid Afridi      8    55        0       2   6.875000
## 59  T van der Gugten      3    13        1       2   4.333333
## 64          TS Mills      3    34        0       2  11.333333
## 65          WAT Beer      4    23        0       2   5.750000
## 31          JH Davey      4    28        0       2   7.000000
## 68         ZS Ansari      3    16        0       2   5.333333
## 25         GM Andrew      3    19        0       2   6.333333
## 23          GJ Batty      6    55        0       2   9.166667
## 16          DJ Bravo      3    27        0       2   9.000000
## 41          MR Quinn      6    65        0       1  10.833333
## ..               ...    ...   ...      ...     ...        ...
## 24     GL van Buuren      7    49        0       1   7.000000
## 37           MD Hunn      3    35        0       1  11.666667
## 36        LC Norwell      6    62        0       1  10.333333
## 29       JC Tredwell      4    35        0       1   8.750000
## 35         LA Dawson      6    53        0       1   8.833333
## 62           TL Best      4    51        0       0  12.750000
## 58         T Westley      2    12        0       0   6.000000
## 4         Azharullah      3    24        0       0   8.000000
## 60     TD Groenewald      1    21        0       0  21.000000
## 61         TK Curran      4    35        0       0   8.750000
## 38         MD Taylor      3    30        0       0  10.000000
## 30        JG Myburgh      1     5        0       0   5.000000
## 8          C Overton      2    18        0       0   9.000000
## 2        Ashar Zaidi      1     5        0       0   5.000000
## 66          WR Smith      2    25        0       0  12.500000
## 28         J Overton      2    24        0       0  12.000000
## 6          BJ Taylor      1     6        0       0   6.000000
## 22          GG White      4    31        0       0   7.750000
## 55          SP Crook      1     9        0       0   9.000000
## 39        ME Claydon      4    40        0       0  10.000000
## 52         RS Bopara      4    32        0       0   8.000000
## 10           CD Nash      2    19        0       0   9.500000
## 11         CH Morris      4    36        0       0   9.000000
## 12         DA Cosker      3    32        0       0  10.666667
## 13      DA Griffiths      4    39        0       0   9.750000
## 45          PD Trego      1    11        0       0  11.000000
## 44   PA van Meekeren      2    19        0       0   9.500000
## 42          MS Crane      2    25        0       0  12.500000
## 20        FK Cowdrey      1    19        0       0  19.000000
## 14        DD Masters      2    16        0       0   8.000000
## 
## [69 rows x 6 columns]

4.8 Natwest T20 Blast -Plot wins vs losses against all teams(Class 3)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-allMatchesAllOpposition"
path=os.path.join(dir1,"Warwickshire-allMatchesAllOpposition.csv")
w_matches = pd.read_csv(path)
yka.plotWinLossByTeamAllOpposition(w_matches,'Warwickshire')

4.9 Natwest T20 Blast -Batsmen Analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-BattingBowlingDetails"
# M Klinger
name="M Klinger"
team='Gloucestershire'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsAgainstOpposition(df,name)

# CA Ingram
name="CA Ingram"
team='Glamorgan'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeStrikeRate(df,name)

4.10 Natwest T20 Blast -Bowler analysis (Class 4)

import os
import pandas as pd
import yorkpy.analytics as yka
dir1="C:\\software\\cricket-package\\yorkpyNWB\\NWBT20-BattingBowlingDetails"
# BAC Howell
name="BAC Howell"
team='Gloucestershire'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgEconRate(df,name)

# GR Napier
name="GR Napier"
team='Essex'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsVenue(df,name)

Note: yorkpy will work for all T20 leagues which are in YAML format as specified in Cricsheet.

You can clone/fork the latest code for yorkpy from github yorkpy

The data for IPL, Intl. T20, BBL and Natwest T20 have already been converted into pandas dataframes and saved as CSVs. You can download the converted files from Github at [allYorkpyT20Data](https://github.com/tvganesh/allYorkpyT20Data)

Conclusion

This post shows the kind of detailed analysis that can be performed with yorkpy. In fact, with all the converted data it should be possible to also train a Machine Learning model, which I will probably keep for another day. You could go ahead and use the data in other innovative ways. Do keep me posted if you do!!

Important note: Do check out my other posts using yorkpy at yorkpy-posts

Have fun with yorkpy!!

See also
1. Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8
2. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
3. Hand detection through Haartraining: A hands-on approach
4. My book ‘Deep Learning from first principles: Second Edition’ now on Amazon
5. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
6. The 3rd paperback & kindle editions of my books on Cricket, now on Amazon

To see all posts click Index of posts

Pitching yorkpy … in the block hole – Part 4

A good programmer is someone who always looks both ways before crossing a one-way street.  Doug Linder

There are two ways to write error-free programs; only the third one works. Alan J. Perlis

In order to understand recursion, one must first understand recursion. Anonymous

This is the fourth and final part of my posts on the Python package yorkpy. In this part yorkpy, the Python avatar of my R package yorkr (see Introducing cricket package yorkr: Part 1- Beaten by sheer pace!), develops wings and is prepared for take-off. The yorkpy package uses data from Cricsheet.

You can clone/download the code at Github yorkpy
This post has been published to RPubs at yorkpy-Part4
You can download this post as PDF at IPLT20-yorkpy-part4
You can download all the data used in this post and the previous post at yorkpyData

This post is a continuation of the earlier posts on yorkpy

1. Pitching yorkpy … short of good length to IPL – Part 1. In this part I included functions that convert the yaml data of IPL matches into Pandas dataframes which are then saved as CSV. This part can perform analysis of individual IPL matches. Note: The converted data is available at yorkpyData.
2. Pitching yorkpy…on the middle and outside off-stump to IPL – Part 2. This part included functions to create a large data frame for head-to-head confrontations between any 2 IPL teams, say CSK-MI, DD-KKR etc., which can be saved as CSV. Analysis is then performed on these team-2-team confrontations. Note: The converted data is available at yorkpyData.
3. Pitching yorkpy…swinging away from the leg stump to IPL – Part 3. The 3rd part includes the performance of any IPL team against all other IPL teams. The data can also be saved as CSV. Note: The converted data is available at yorkpyData.

Note: If you would like to do a similar analysis for a different set of batsmen and bowlers, you can clone/download my skeleton yorkpy-template from Github (which is the R Markdown file I have used for the analysis below).

This 4th and final part includes analysis of batting and bowling performances of any IPL player. The batting and bowling details for all teams have already been converted and are available at IPLT20-Batting-BowlingDetails

This part includes the following new functions

Batsman functions

  1. batsmanRunsVsDeliveries
  2. batsmanFoursSixes
  3. batsmanDismissals
  4. batsmanRunsVsStrikeRate
  5. batsmanMovingAverage
  6. batsmanCumulativeAverageRuns
  7. batsmanCumulativeStrikeRate
  8. batsmanRunsAgainstOpposition
  9. batsmanRunsVenue

Bowler functions

  1. bowlerMeanEconomyRate
  2. bowlerMeanRunsConceded
  3. bowlerMovingAverage
  4. bowlerCumulativeAvgWickets
  5. bowlerCumulativeAvgEconRate
  6. bowlerWicketPlot
  7. bowlerWicketsAgainstOpposition
  8. bowlerWicketsVenue

A. Batsman functions

1. Get IPL Team Batting details

The function below gets the overall IPL team batting details based on the CSV files that were saved for IPL T20 matches. This data is also available on Github at yorkpyData. The batting details of an IPL team in each match are computed, and a large data frame is created by combining the batting details from every match. This can be saved as a CSV file with a name such as Delhi Daredevils-BattingDetails.csv.

dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
#csk_details = yka.getTeamBattingDetails("Chennai Super Kings",dir=dir1, save=True)
#dd_details = yka.getTeamBattingDetails("Delhi Daredevils",dir=dir1,save=True)
#kkr_details = yka.getTeamBattingDetails("Kolkata Knight Riders",dir=dir1,save=True)
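
Under the hood, the combining step is essentially a concatenation of per-match data frames. Below is a minimal pandas sketch of the idea; the file pattern, the helper name stack_batting_details and the column layout are my own illustrative assumptions, not yorkpy's actual internals.

import glob
import os
import pandas as pd

# Hypothetical sketch (not yorkpy's code): read each match's batting-details
# CSV and stack the data frames into one large data frame, then save it.
def stack_batting_details(dir1, team):
    pattern = os.path.join(dir1, "*" + team + "-BattingDetails.csv")  # assumed pattern
    frames = [pd.read_csv(f) for f in glob.glob(pattern)]
    details = pd.concat(frames, ignore_index=True)
    details.to_csv(team + "-BattingDetails.csv", index=False)
    return details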

2. Get IPL batsman details

This function is used to get the individual IPL T20 batting record for the specified batsman of a team, as in the functions below.

For the batsman functions below I have chosen Rishabh Pant, Kane Williamson and Ambati Rayudu for the analysis, as they top the batting lists. You can choose any IPL batsman for the analysis.

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
rpant=yka.getBatsmanDetails(team,name,dir=dir1)

3. Batsman Runs vs Deliveries (in IPL matches)

This function plots the runs scored vs the deliveries faced for a batsman

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsDeliveries(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsDeliveries(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsDeliveries(df,name)

4. Batsman fours and sixes (in IPL matches)

This plots the fours, sixes and the total runs for a batsman

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanFoursSixes(df,name)


# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanFoursSixes(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanFoursSixes(df,name)

5. Batsman dismissals (in IPL matches)

The plots below show the dismissals of the batsmen

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanDismissals(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanDismissals(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanDismissals(df,name)

6. Batsman Runs vs Strike Rate (in IPL matches)

The plots below give the runs vs strike rate for the batsmen

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsStrikeRate(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsStrikeRate(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVsStrikeRate(df,name)

7. Batsman Moving average of runs (in IPL matches)

The functions below compute and plot the moving average of runs for each batsman (a short pandas sketch of the computation follows the plots)

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanMovingAverage(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanMovingAverage(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanMovingAverage(df,name)
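
For reference, the moving-average computation itself is straightforward in pandas. The sketch below is mine, assuming a simple series of match-wise runs with made-up values; the actual column names in yorkpy's data frames may differ.

import pandas as pd

# Illustrative only: 3-match moving average of runs scored.
runs = pd.Series([38, 57, 12, 44, 79, 21], name="runs")
moving_avg = runs.rolling(window=3).mean()   # NaN for the first 2 matches
print(moving_avg)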

8. Batsman Cumulative average of runs (in IPL matches)

The functions below plot the cumulative average runs of the batsmen (see the sketch after the plots)

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeAverageRuns(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeAverageRuns(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeAverageRuns(df,name)
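
The cumulative average is the running mean over all innings up to each match. A minimal pandas sketch of the computation (my own, with made-up run values):

import pandas as pd

# Illustrative only: cumulative (expanding) average of runs after each match.
runs = pd.Series([38, 57, 12, 44, 79, 21])
cumulative_avg = runs.expanding().mean()   # mean over innings 1..n at match n
print(cumulative_avg)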

9. Batsman Cumulative Strike Rate (in IPL matches)

The functions below plot the cumulative strike rate of the batsmen. Strike rate is the runs scored per 100 balls faced (see the sketch after the plots)

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeStrikeRate(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeStrikeRate(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanCumulativeStrikeRate(df,name)
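
The cumulative strike rate is 100 * (total runs so far) / (total balls so far). A minimal sketch (mine, with made-up values):

import pandas as pd

# Illustrative only: cumulative strike rate after each match.
runs  = pd.Series([38, 57, 12, 44])
balls = pd.Series([30, 41, 15, 28])
cum_strike_rate = 100 * runs.cumsum() / balls.cumsum()
print(cum_strike_rate)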

10. Batsman performance against opposition (in IPL matches)

The plots below show how the batsmen performed against other IPL teams

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsAgainstOpposition(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsAgainstOpposition(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsAgainstOpposition(df,name)

11. Batsman performance at different venues (in IPL matches)

The plots below show how the batsmen performed at different venues

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Rishabh Pant
name="RR Pant"
team='Delhi Daredevils'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVenue(df,name)

# 2. Kane Williamson
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="KS Williamson"
team='Sunrisers Hyderabad'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVenue(df,name)

#3. Ambati Rayudu
name="AT Rayudu"
team='Mumbai Indians'
df=yka.getBatsmanDetails(team,name,dir=dir1)
yka.batsmanRunsVenue(df,name)

B. Bowler functions

12. Get bowling details in IPL matches

The function below gets the overall IPL T20 bowling details of a team based on the CSV files that were saved for IPL T20 matches. This data is also available in Github at yorkpyData. The bowling details of the IPL team in each match are extracted, and a huge data frame is created by stacking the individual data frames. This can be saved as a CSV file, e.g. Chennai Super Kings-BowlingDetails.csv

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
#kkr_bowling = yka.getTeamBowlingDetails("Kolkata Knight Riders",dir=dir1,save=True)
#csk_bowling = yka.getTeamBowlingDetails("Chennai Super Kings",dir=dir1,save=True)
#kxip_bowling = yka.getTeamBowlingDetails("Kings XI Punjab",dir=dir1,save=True)

13. Get bowling details of the individual IPL bowlers

This function is used to get the individual bowling record for a specified bowler of a team, as in the functions below.

The plots below deal with the bowlers' performance. For this analysis I have chosen Amit Mishra, Piyush Chawla and Bhuvneshwar Kumar. You can choose any other IPL bowler.

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)

14. Bowler Economy Rate (in IPL matches)

The plots below show the mean economy rate of the selected bowlers, i.e. the runs conceded per over (a short sketch of the computation follows the plots)

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanEconomyRate(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanEconomyRate(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanEconomyRate(df,name)
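
The economy-rate computation itself is simple: runs conceded divided by overs bowled. A minimal sketch with made-up values (the column names are my assumptions, not yorkpy's):

import pandas as pd

# Illustrative only: per-match economy rate and its mean across matches.
df = pd.DataFrame({"runs_conceded": [24, 31, 18], "overs": [4, 4, 3]})
df["econ_rate"] = df["runs_conceded"] / df["overs"]
print(df["econ_rate"].mean())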

15. Bowler Mean Runs conceded (in IPL matches)

The plots below show the mean runs conceded by the selected bowlers

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanRunsConceded(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanRunsConceded(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMeanRunsConceded(df,name)

16. Moving average of wickets for bowler (in IPL matches)

The moving average of wickets for each bowler is plotted below

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMovingAverage(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMovingAverage(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerMovingAverage(df,name)

17. Cumulative average wickets for bowler (in IPL matches)

The cumulative average wickets for each bowler is computed and plotted

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgWickets(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgWickets(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgWickets(df,name)

18. Cumulative average economy rate for bowler (in IPL matches)

The plots below give the cumulative average economy rate for each bowler

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgEconRate(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgEconRate(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerCumulativeAvgEconRate(df,name)

19. Bowler wicket plot (in IPL matches)

The plots below give the overs vs wickets taken for the bowlers

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketPlot(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketPlot(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketPlot(df,name)

20. Bowler wicket against opposition (in IPL matches)

The performance of the bowlers against different IPL teams is shown below

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsAgainstOpposition(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsAgainstOpposition(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsAgainstOpposition(df,name)

21. Bowler wicket in different venues (in IPL matches)

The plots below show how the bowlers perform at different venues

import pandas as pd
import os
import yorkpy.analytics as yka
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
# 1. Amit Mishra
name="A Mishra"
team='Delhi Daredevils'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsVenue(df,name)

# 2. Piyush Chawla
dir1= "C:\\software\\cricket-package\\yorkpyIPLData\\data3"
name="PP Chawla"
team='Kolkata Knight Riders'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsVenue(df,name)

#3. Bhuvneshwar Kumar
name="B Kumar"
team='Sunrisers Hyderabad'
df=yka.getBowlerWicketDetails(team,name,dir=dir1)
yka.bowlerWicketsVenue(df,name)

Note: You can clone/download the code at Github yorkpy

Important note: Do check out my other posts using yorkpy at yorkpy-posts

Conclusion: This concludes the series on the Python package yorkpy. Go ahead and give yorkpy a spin!

Also see
1. Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8
2. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
3. Hand detection through Haartraining: A hands-on approach
4.My book ‘Deep Learning from first principles:Second Edition’ now on Amazon
5. Big Data-1: Move into the big league:Graduate from Python to Pyspark
6. Cricpy takes a swing at the ODIs

To see all posts click Index of posts

Take 4+: Presentations on ‘Elements of Neural Networks and Deep Learning’ – Parts 1-8

“Lights, camera and … action – Take 4+!”

This post includes a rework of all the presentations of ‘Elements of Neural Networks and Deep Learning – Parts 1-8’, since my earlier presentations had some missing parts, omissions and occasional errors. So I have re-recorded all the presentations.
This series of presentations does a deep-dive into Deep Learning networks, starting from the fundamentals. The equations required for performing learning in an L-layer Deep Learning network are derived in detail, starting from the basics. Further, the presentations also discuss multi-class classification, regularization techniques, and gradient descent optimization methods in deep networks. Finally, the presentations touch on how Deep Learning networks can be tuned.

The corresponding implementations in vectorized R, Python and Octave are available in my book ‘Deep Learning from first principles: Second edition – In vectorized Python, R and Octave’

1. Elements of Neural Networks and Deep Learning – Part 1
This presentation introduces Neural Networks and Deep Learning. It takes a look at the history of Neural Networks and Perceptrons, explains why Deep Learning networks are required, and concludes with a simple toy example of a Neural Network and how it computes. This part also includes a small digression on the basics of Machine Learning and how an algorithm learns from a data set

2. Elements of Neural Networks and Deep Learning – Part 2
This presentation takes logistic regression as an example and creates an equivalent 2-layer Neural Network. The presentation also takes a look at forward & backward propagation and how the cost is minimized using gradient descent (a minimal numpy sketch of this idea follows)
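
As a rough numpy sketch of the idea (my own toy example, not the notation or code from the presentation): logistic regression seen as a 2-layer network with a sigmoid output unit, trained by gradient descent on the cross-entropy cost.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 200))                       # 2 features, 200 examples
Y = (X[0] + X[1] > 0).astype(float).reshape(1, -1)  # toy labels

W, b, lr = np.zeros((1, 2)), 0.0, 0.5
for i in range(1000):
    A = sigmoid(W @ X + b)              # forward propagation
    dZ = A - Y                          # dJ/dZ for sigmoid + cross-entropy
    W -= lr * (dZ @ X.T) / X.shape[1]   # gradient descent update
    b -= lr * dZ.mean()
print("training accuracy:", ((A > 0.5) == Y).mean())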


The implementations of the discussed 2-layer Neural Network in vectorized R, Python and Octave are available in my post ‘Deep Learning from first principles in Python, R and Octave – Part 1‘

3. Elements of Neural Networks and Deep Learning – Part 3
This 3rd part discusses a primitive neural network with an input layer, a hidden layer and an output layer. The neural network uses the tanh activation in the hidden layer and a sigmoid activation in the output layer. The equations for forward and backward propagation are derived.


To see the implementations for the above discussed video see my post ‘Deep Learning from first principles in Python, R and Octave – Part 2

4. Elements of Neural Network and Deep Learning – Part 4
This presentation is a continuation of my 3rd presentation, in which I derived the equations for a simple 3-layer Neural Network with 1 hidden layer. In this video presentation, I discuss step-by-step the derivations for an L-layer, multi-unit Deep Learning Network, with any activation function g(z) (a compact forward-pass sketch follows)
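
A compact sketch of the forward pass through an L-layer network with a generic activation g(z); the layer sizes and initialization below are illustrative assumptions of mine, not the presentation's code.

import numpy as np

# Illustrative only: forward propagation with Z[l] = W[l] A[l-1] + b[l],
# A[l] = g(Z[l]) in the hidden layers and a sigmoid at the output layer.
def forward(X, params, g):
    A, L = X, len(params)
    for l, (W, b) in enumerate(params, start=1):
        Z = W @ A + b
        A = g(Z) if l < L else 1.0 / (1.0 + np.exp(-Z))
    return A

rng = np.random.default_rng(1)
dims = [4, 5, 3, 1]                                   # toy layer sizes
params = [(rng.normal(size=(dims[l + 1], dims[l])) * 0.01,
           np.zeros((dims[l + 1], 1))) for l in range(len(dims) - 1)]
print(forward(rng.normal(size=(4, 10)), params, np.tanh).shape)   # (1, 10)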


The implementations of L-Layer, multi-unit Deep Learning Network in vectorized R, Python and Octave are available in my post Deep Learning from first principles in Python, R and Octave – Part 3

5. Elements of Neural Network and Deep Learning – Part 5
This presentation discusses multi-class classification using the Softmax function. The detailed derivation of the Jacobian of the Softmax is discussed, and subsequently the derivative of the cross-entropy loss is also discussed in detail. Finally, the complete set of equations for a Neural Network with multi-class classification is derived (a small numerical sketch follows)
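
The well-known end result of combining the Softmax Jacobian with the cross-entropy derivative is that the gradient with respect to the pre-activation Z collapses to softmax(Z) - Y. A small numerical sketch of my own:

import numpy as np

def softmax(Z):
    e = np.exp(Z - Z.max(axis=0, keepdims=True))    # shift for numerical stability
    return e / e.sum(axis=0, keepdims=True)

Z = np.array([[2.0, 1.0], [1.0, 3.0], [0.1, 0.2]])  # 3 classes, 2 examples
Y = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # one-hot labels
A = softmax(Z)
dZ = A - Y                                          # gradient used in backprop
print(A.round(3)); print(dZ.round(3))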


The corresponding implementations in vectorized R, Python and Octave are available in the following posts
a. Deep Learning from first principles in Python, R and Octave – Part 4
b. Deep Learning from first principles in Python, R and Octave – Part 5

6. Elements of Neural Networks and Deep Learning – Part 6
This part discusses initialization methods, specifically He and Xavier initialization. The presentation also focuses on how to prevent over-fitting using regularization. Lastly, the dropout method of regularization is also discussed (a short sketch of these techniques follows)
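
For reference, a short sketch (mine, with illustrative layer sizes) of the techniques mentioned: He and Xavier initialization scale random weights by the size of the previous layer to keep activation variances stable, while inverted dropout zeroes units with probability 1 - keep_prob and rescales the survivors.

import numpy as np

rng = np.random.default_rng(2)
n_prev, n_curr = 64, 32
W_he     = rng.normal(size=(n_curr, n_prev)) * np.sqrt(2.0 / n_prev)  # He: suits ReLU
W_xavier = rng.normal(size=(n_curr, n_prev)) * np.sqrt(1.0 / n_prev)  # Xavier: suits tanh

# Inverted dropout on an activation matrix A.
keep_prob = 0.8
A = rng.normal(size=(n_curr, 10))
mask = (rng.random(A.shape) < keep_prob).astype(float)
A = A * mask / keep_prob    # rescale so expected activations are unchanged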


The corresponding implementations in vectorized R, Python and Octave of the above discussed methods are available in my post Deep Learning from first principles in Python, R and Octave – Part 6

7. Elements of Neural Networks and Deep Learning – Part 7
This presentation introduces the exponentially weighted moving average and shows how it is used in different approaches to gradient descent optimization. The key techniques discussed are learning rate decay, the momentum method, RMSProp and Adam (a minimal sketch of the EWMA building block follows)
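
The exponentially weighted moving average is the common building block: v := beta*v + (1 - beta)*x. A minimal sketch (my own toy values) of the momentum update, which applies this average to the gradients:

# Illustrative only: momentum-style parameter update using an EWMA of gradients.
w, v, lr, beta = 1.0, 0.0, 0.1, 0.9
for grad in [0.5, 0.4, 0.6, 0.3]:          # made-up gradient values
    v = beta * v + (1 - beta) * grad       # exponentially weighted moving average
    w -= lr * v                            # parameter step along the smoothed gradient
print(w)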

The equivalent implementations of the gradient descent optimization techniques in R, Python and Octave can be seen in my post Deep Learning from first principles in Python, R and Octave – Part 7

8. Elements of Neural Networks and Deep Learning – Part 8
This last part touches on the method to adopt while tuning hyper-parameters in Deep Learning networks

Check out my book ‘Deep Learning from first principles: Second Edition – In vectorized Python, R and Octave’. My book starts with the implementation of a simple 2-layer Neural Network and works its way to a generic L-layer Deep Learning Network, with all the bells and whistles. The derivations have been discussed in detail. The code has been extensively commented and included in its entirety in the Appendix sections. My book is available on Amazon as a paperback ($18.99) and in a Kindle version ($9.99/Rs 449).

This concludes this series of presentations on ‘Elements of Neural Networks and Deep Learning’

Also
1. My book ‘Practical Machine Learning in R and Python: Third edition’ on Amazon
2. Introducing cricpy:A python package to analyze performances of cricketers
3. Natural language processing: What would Shakespeare say?
4. Big Data-2: Move into the big league:Graduate from R to SparkR
5. Presentation on Wireless Technologies – Part 1
6. Introducing cricketr! : An R package to analyze performances of cricketers

To see all posts click Index of posts