Near Real-time Analytics of ICC Men’s T20 World Cup with GooglyPlusPlus

In my last post, GooglyPlusPlus gets ready for ICC Men’s T20 World Cup, I had mentioned that GooglyPlusPlus was preparing for the big event, the ICC Men’s T20 World Cup. Now that the T20 World Cup is underway, my Shiny app in R, GooglyPlusPlus, will be generating near real-time analytics of matches completed the previous day. Besides this, the app can also do historical analysis of players, teams and matches.

The whole process is automated. A cron job executes every morning: it automatically downloads the previous day’s matches from Cricsheet, unzips them, starts a pipeline which transforms and processes the match data into the necessary folders, and finally uploads the newly acquired data to my Shiny app. Hence, you will be able to access all the breathless, pulsating cricketing action in timeless, interactive plots and tables which capture all aspects of Men’s T20 matches, namely batsman and bowler performance, match analysis, team-vs-team and team-vs-all-teams analysis, besides ranking of batsmen & bowlers. Since the data is cumulative, all the analytics are both historical and current.
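
For the curious, the sketch below shows what such a scheduled refresh could look like in R. This is only a minimal illustration and not the actual GooglyPlusPlus code; the download URL, folder names, script name and cron schedule are all placeholders.

# refreshT20WC.R - illustrative daily refresh script (all paths/URLs are placeholders)
# A crontab entry such as the following could run it every morning:
#   0 6 * * * Rscript refreshT20WC.R
zipUrl  <- "https://cricsheet.org/downloads/ipl.zip"   # substitute the desired Cricsheet archive
zipFile <- tempfile(fileext = ".zip")
rawDir  <- "./raw_matches"                             # unpacked match files land here
download.file(zipUrl, zipFile, mode = "wb")
unzip(zipFile, exdir = rawDir)
# The downstream pipeline (convert, process, redeploy the Shiny app) then runs on these files
newFiles <- list.files(rawDir, full.names = TRUE)
cat(length(newFiles), "match files ready for processing\n")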

Check out GooglyPlusPlus!!

The data for GooglyPlusPlus is taken from Cricsheet

Interest in cricket has mushroomed in recent times around the world, with the addition of new formats, starting with ODI and followed by T20, T10, 100-ball and so on. There are leagues which host these matches at different levels around the world. While GooglyPlusPlus provides near real-time analytics of the Men’s T20 World Cup, we can clearly envision a big data platform which ingests matches daily from multiple cricket formats and leagues around the world, generating real-time and near real-time analytics, which are essential these days for the selection of teams at different levels through auctions. For more discussion on this see my posts

  1. Big Data 7: yorkr waltzes with Apache NiFi
  2. Big Data 6: The T20 Dance of Apache NiFi and yorkpy

We could imagine a Data Lake into which data from the different cricket formats and leagues is ingested through appropriate technology connectors. Once the data is ingested, we could have data pipelines, based on Azure ADF, Apache NiFi, Apache Airflow or Amazon EMR etc., to transform, process and enhance the data, generating real-time analytics on the fly. Recent formats like T20 and T10 demand more urgency in strategic thinking, whether it is scoring within a limited number of overs or preventing batsmen from going on a rampage within that set of overs, and analytics on the fly may help the coach modify the batting or bowling lineup at key points in the match. In this context see my earlier post Using Linear Programming (LP) for optimizing bowling change or batting lineup in T20 cricket
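
As a toy illustration of that idea (and not the formulation from that post), the snippet below uses the lpSolve package to allocate the remaining overs among four bowlers so that the expected runs conceded are minimized. The economy rates and over quotas are invented purely for illustration.

library(lpSolve)
# Toy data: expected runs per over and remaining over quota for four bowlers
economy   <- c(7.2, 8.5, 6.8, 9.0)
quota     <- c(2, 1, 2, 2)
oversLeft <- 5
# Constraints: total overs bowled equals oversLeft; each bowler stays within his quota
constMat <- rbind(rep(1, 4), diag(4))
constDir <- c("=", rep("<=", 4))
constRhs <- c(oversLeft, quota)
sol <- lp("min", economy, constMat, constDir, constRhs, all.int = TRUE)
sol$solution   # overs to give each bowler
sol$objval     # expected runs conceded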

All of these are not just possible, but are likely to become reality as more and more formats, leagues and cricket data proliferate around the world.

This post focuses on generating near real-time analytics for the ICC Men’s T20 World Cup using GooglyPlusPlus. Included below is a sampling of the analytics that you can perform for analysing the matches. In addition, you can do all the analysis included in my post GooglyPlusPlus gets ready for ICC Men’s T20 World Cup.

1. Namibia vs Sri Lanka – 16 Oct 2022: Match Worm graph

The opening match between Namibia and Sri Lanka resulted in an upset. We can see this in the match worm-wicket graph below.

2. Scotland vs West Indies – 17 Oct 2022: Batsmen vs Bowlers

George Munsey was the top scorer for Scotland and was instrumental in the win against the West Indies. His performance against the West Indies bowlers is shown below. Note: the charts are interactive.

3. Zimbabwe vs Ireland – 17 Oct 2022 : Team Runs vs SR

Sikander Raza of Zimbabwe scored 82 runs at a strike rate of ~170.

4. United Arab Emirates vs Netherlands – 16 Oct 2022: Team runs across 20 overs

UAE pipped Netherlands in the middle overs, but Netherlands were able to win by 3 wickets with 1 ball to spare.

5. Scotland vs Ireland – 19 Oct 2022 : Team Runs vs SR Middle overs plot

Curtis Campher snatched the game away from Scotland with his stellar performance in the middle and death overs.

6. UAE vs Namibia : 20 Oct 2022 : Team Wickets vs ER plot

Basoor Hameed and Zahoor Khan took 2 wickets apiece at an economy rate of ~5.00, but they were still not able to stop UAE from stealing a win.

7. Overall Runs vs SR in T20 World Cup 2022

It is too early to rank the players; nevertheless, in the current T20 World Cup, MP O’Dowd (Netherlands), BKG Mendis (Sri Lanka) and JN Frylinck (Namibia) are the top 3 batsmen with good runs and strike rates.

8. Overall Wickets over ER in T20 World Cup 2022

The top 3 bowlers so far in the T20 World Cup 2022 are a) BFW de Leede (Netherlands), b) PWH De Silva (Sri Lanka) and c) KP Meiyappan (UAE), with a total of 7, 7 and 6 wickets respectively.

Note: Besides the match analysis above, GooglyPlusPlus also provides detailed analysis of batsmen, bowlers, team-vs-team, team-vs-all teams, ranking of batsmen & bowlers etc. For more details see my post GooglyPlusPlus gets ready for ICC Men’s T20 World Cup.

Do visit GooglyPlusPlus every day to check out the cricketing action of matches gone by. You can also follow me on Twitter @tvganesh_85 for daily highlights.

You may also like

  1. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
  2. De-blurring revisited with Wiener filter using OpenCV
  3. Using Reinforcement Learning to solve Gridworld
  4. Deep Learning from first principles in Python, R and Octave – Part 3
  5. Getting started with Tensorflow, Keras in Python and R
  6. Big Data-4: Webserver log analysis with RDDs, Pyspark, SparkR and SparklyR
  7. Practical Machine Learning with R and Python – Part 5
  8. Cricpy takes a swing at the ODIs
  9. Video presentation on Machine Learning, Data Science, NLP and Big Data – Part 1

To see all posts click Index of posts

Big Data 7: yorkr waltzes with Apache NiFi

In this post, I construct an end-to-end Apache NiFi pipeline with my R package yorkr. This post is a mirror of my earlier post Big Data-5: kNiFing through cricket data with yorkpy, based on my Python package yorkpy. The Apache NiFi data pipeline flows all the way from the source, where the data is obtained, to the target analytics output. Apache NiFi was created to automate the flow of data between systems. NiFi dataflows enable the automated and managed flow of information between systems. This post automates the flow of data from Cricsheet, from where the zip file is downloaded, unpacked, processed and transformed, and finally T20 players are ranked.

This post uses the functions of my R package yorkr to rank IPL players. This is an example flow of a typical Big Data pipeline where the data is ingested from many diverse source systems, transformed, and finally insights are generated. While I execute this NiFi example with my R package yorkr, in a typical Big Data pipeline where the data is huge, of the order of 100s of GB, we would be using the Hadoop ecosystem with Hive, HDFS, Spark and so on. Since the data from Cricsheet amounts to only a few megabytes, this approach suffices. However, if we hypothetically assume that several batches of cricket data are being uploaded to the source, from different cricket matches happening all over the world, and the historical data exceeds several GBs, then we could use a similar Apache NiFi pattern to process the data and generate insights. If the data were large and distributed across a Hadoop cluster, then we would need to use SparkR or SparklyR to process the data.

This is shown below pictorially

While this post displays the ranks of IPL batsmen, it is possible to create a cool dashboard using UI/UX technologies like AngularJS/ReactJS. Take a look at my post Big Data 6: The T20 Dance of Apache NiFi and yorkpy, where I create a simple dashboard of multiple analytics.

My R package yorkr can handle both men’s and women’s ODIs, and all formats of T20 in Cricsheet, namely Intl. T20 (men’s, women’s), IPL, BBL, Natwest T20, PSL, Women’s BBL etc. To know more details about yorkr see Revitalizing R package yorkr.

The code can be forked from Github at yorkrWithApacheNiFi

You can take a look at the live demo of the NiFi pipeline at yorkr waltzes with Apache NiFi

 

Basic Flow

1. Overall flow

The overall NiFi flow contains 2 Process Groups: a) DownloadAndUnpack and b) ProcessAndRankT20Players, which converts and ranks the IPL batsmen. While it appears that the Process Groups are disconnected, they are not. The first Process Group downloads the T20 zip file, unpacks the .zip file and saves the YAML files in a specific folder. The second Process Group monitors this folder and starts processing as soon as the YAML files are available. It processes the YAML files, converting them into dataframes before storing them as CSV files. The next processor then does the actual ranking of the batsmen before writing the output into IPLrank.txt.
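
The hand-off between the two Process Groups is simply a shared folder. Outside NiFi, the behaviour of the ListFile processor that kicks off the second group can be mimicked with a simple polling loop; the sketch below is only an illustration, and the folder name and polling interval are arbitrary.

# Minimal stand-in for the ListFile processor: wait until YAML files appear
watchDir <- "./unpacked_yaml"    # folder written to by the first Process Group
repeat {
  yamlFiles <- list.files(watchDir, pattern = "\\.yaml$", full.names = TRUE)
  if (length(yamlFiles) > 0) break   # files have arrived; start the conversion step
  Sys.sleep(60)                      # poll once a minute
}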

1.1 DownloadAndUnpack Process Group

This process group is shown below

 

1.1.1 GetT20Data

The GetT20Data Processor downloads the zip file given the URL

The ${T20data} variable points to the specific T20 format that needs to be downloaded. I have set this to https://cricsheet.org/downloads/ipl.zip. This could be set to any other data set. In fact, we could have parallel data flows for different T20/sports data sets and generate analytics for each.

1.1.2 SaveUnpackedData

This processor stores the YAML files in a predetermined folder, so that the data can be picked up by the 2nd Process Group for processing.

 

1.2 ProcessAndRankT20Players Process Group

This is the second Process Group, which converts the YAML files to dataframes before storing them as CSV files. The RankIPLPlayers processor will then read all the CSV files, stack them and then proceed to rank the IPL players. The Process Group is shown below.

 

1.2.1 ListFile and FetchFile Processors

The 2 Processors on the left, ListFile and FetchFile, get all the YAML files from the folder and pass them to the next processor.

1.2.2 convertYaml2DataFrame Processor

The convertYaml2DataFrame Processor uses ExecuteStreamCommand, which calls Rscript. The R script invokes the yorkr function convertYaml2RDataframeT20() as shown below.

 

I also use 16 concurrent tasks to convert 16 different FlowFiles at once.

 

library(yorkr)
# Arguments passed in by the ExecuteStreamCommand processor
args <- commandArgs(TRUE)
# Convert a single YAML match file into an R dataframe (the three arguments follow the processor's configuration)
convertYaml2RDataframeT20(args[1], args[2], args[3])
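
Outside NiFi, this fan-out can be approximated with R's parallel package. The sketch below is only an illustration: the folder names are placeholders and the argument order (file, source folder, target folder) is assumed to mirror the script above.

library(yorkr)
library(parallel)
yamlDir   <- "./unpacked_yaml"   # folder containing the YAML match files
targetDir <- "./ipl_data"        # converted dataframes are written here
yamlFiles <- list.files(yamlDir, pattern = "\\.yaml$")
# Convert the files in parallel, roughly mirroring the 16 concurrent NiFi tasks
# (mc.cores > 1 is not supported on Windows; use parLapply there instead)
invisible(mclapply(yamlFiles,
                   function(f) convertYaml2RDataframeT20(f, yamlDir, targetDir),
                   mc.cores = 16))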

1.2.3 MergeContent Processor

This processor’s only job is to trigger RankT20Players once all the FlowFiles have been merged into one file.

1.2.4 RankT20Players

This processor is an ExecuteStreamCommand Processor that executes an Rscript which invokes the yorkr function rankIPLBatsmen()

library(yorkr)
# Arguments supplied by the ExecuteStreamCommand processor
args <- commandArgs(TRUE)

# Rank the IPL batsmen from the converted match data
rankIPLBatsmen(args[1], args[2], args[3])

1.2.5 OutputRankofT20Player Processor

This processor writes the generated rank to an output file.

 

1.3 Final Ranking of IPL T20 players

The Node.js-based web server picks up this file and displays the final ranks on the web page (the code is based on a helpful YouTube tutorial on reading from a file).

[1] "Chennai Super Kings"
[1] "Deccan Chargers"
[1] "Delhi Daredevils"
[1] "Kings XI Punjab"
[1] "Kochi Tuskers Kerala"
[1] "Kolkata Knight Riders"
[1] "Mumbai Indians"
[1] "Pune Warriors"
[1] "Rajasthan Royals"
[1] "Royal Challengers Bangalore"
[1] "Sunrisers Hyderabad"
[1] "Gujarat Lions"
[1] "Rising Pune Supergiants"
[1] "Chennai Super Kings-BattingDetails.RData"
[1] "Deccan Chargers-BattingDetails.RData"
[1] "Delhi Daredevils-BattingDetails.RData"
[1] "Kings XI Punjab-BattingDetails.RData"
[1] "Kochi Tuskers Kerala-BattingDetails.RData"
[1] "Kolkata Knight Riders-BattingDetails.RData"
[1] "Mumbai Indians-BattingDetails.RData"
[1] "Pune Warriors-BattingDetails.RData"
[1] "Rajasthan Royals-BattingDetails.RData"
[1] "Royal Challengers Bangalore-BattingDetails.RData"
[1] "Sunrisers Hyderabad-BattingDetails.RData"
[1] "Gujarat Lions-BattingDetails.RData"
[1] "Rising Pune Supergiants-BattingDetails.RData"
# A tibble: 429 x 4
   batsman     matches meanRuns meanSR
   <chr>         <int>    <dbl>  <dbl>
 1 DA Warner       130     37.9   128.
 2 LMP Simmons      29     37.2   106.
 3 CH Gayle        125     36.2   134.
 4 HM Amla          16     36.1   108.
 5 ML Hayden        30     35.9   129.
 6 SE Marsh         67     35.9   120.
 7 RR Pant          39     35.3   135.
 8 MEK Hussey       59     33.8   105.
 9 KL Rahul         59     33.5   128.
10 MN van Wyk        5     33.4   112.
# … with 419 more rows

 

Conclusion

This post demonstrated an end-to-end pipeline with Apache NiFi and the R package yorkr. You can extend this pipeline to generate different analytics using the various functions of yorkr and display them on a dashboard.

Hope you enjoyed this post!

 

See also
1. The mechanics of Convolutional Neural Networks in Tensorflow and Keras
2. Deep Learning from first principles in Python, R and Octave – Part 7
3. Fun simulation of a Chain in Android
4. Natural language processing: What would Shakespeare say?
5. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
6. Cricketr learns new tricks : Performs fine-grained analysis of players
7. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
8. Practical Machine Learning with R and Python – Part 5
9. Cricpy adds team analytics to its arsenal!!

To see posts click Index of posts

Big Data 6: The T20 Dance of Apache NiFi and yorkpy

“I don’t count my sit-ups. I only start counting once it starts hurting. ”

Muhammad Ali

“Hard work beats talent when talent doesn’t work hard.”

Tim Notke

In my previous post Big Data 5: kNiFI-ing through cricket data with Apache NiFi and yorkpy, I created a Big Data pipeline that takes raw data in YAML format from Cricsheet through to processing and ranking of IPL T20 players. In that post I had mentioned that we could create a similar pipeline to create a real-time dashboard of IPL analytics. I could have done this earlier, but I needed to learn how to create a Web UI. After digging and poking around, I have been able to create a simple Web UI running off an Apache web server. This UI uses basic jQuery and CSS to display a real-time IPL T20 dashboard. As in my previous post, this is an end-2-end Big Data pipeline which can handle large data sets at scheduled times, process them and generate real-time dashboards.

We could imagine an inter-galactic T20 championship league where T20 data comes in every hour or sooner, and we need to perform analytics to see if we earthlings are any better than people with pointy heads or little green men. The NiFi pipeline could be used as-is; however, the yorkpy package would have to be rewritten in PySpark. That is for another eon, though.

My package yorkpy has ~45+ functions which fall into the following main categories

1. Pitching yorkpy . short of good length to IPL – Part 1 (Class 1): This includes functions that convert the YAML data of IPL matches into pandas dataframes which are then saved as CSV. This part can perform analysis of individual IPL matches.
2. Pitching yorkpy . on the middle and outside off-stump to IPL – Part 2 (Class 2): This part includes functions to create a large dataframe for head-to-head confrontations between any 2 IPL teams, say CSK-MI, DD-KKR etc., which can be saved as CSV. Analysis is then performed on these team-2-team confrontations.
3. Pitching yorkpy . swinging away from the leg stump to IPL – Part 3 (Class 3): The 3rd part includes the performance of any IPL team against all other IPL teams. The data can also be saved as CSV.
4. Pitching yorkpy … in the block hole – Part 4 (Class 4): This part performs analysis of individual IPL batsmen and bowlers.

 

Watch the live demo of the end-2-end NiFi pipeline at ‘The T20 Dance’.

You can download the NiFi template and associated code from GitHub at T20 Dance.

The Apache NiFi Pipeline is shown below

1. T20 Dance – Overall NiFi Pipeline

 

There are 5 process groups

2. ListAndConvertYaml2DataFrames

This flow starts with the YAML files already downloaded and unpacked from Cricsheet. The individual YAML files are converted into pandas dataframes and saved as CSV. A concurrency of 12 is used to increase performance and process YAML files in parallel. The MergeContent processor creates merged content to signal the completion of the conversion and triggers the other Process Groups through a funnel.

 

3. Analyse individual IPL T20 matches

This Process Group, ‘Analyse T20 matches’, uses yorkpy’s Class 1 functions, which can perform analysis of individual IPL T20 matches. The matchWorm() and matchScorecard() functions are used, though any other function could have been used. The Process Group is shown below.

 

4. Analyse performance of an IPL team in all matches against another IPL team

This Process Group, ‘Analyse performance of IPL team in all matches against another IPL team’, does analysis of all matches between any 2 IPL teams (Class 2), as shown below.

5. Analyse performance of IPL team in all matches against all other IPL teams

This uses Class 3 functions. Individual data sets for each IPL team versus all other IPL teams are created before the Class 3 yorkpy functions are invoked. This is included below.

6. Analyse performances of IPL batsmen and bowlers

This Process Group uses Class 4 yorkpy functions. The match CSV files are processed to get batting and bowling details before calling the individual functions as shown below

 

7. IPL T20 Dashboard

The IPL T20 Dashboard is shown

 

Conclusion

This NiFi pipeline was built for IPL T20; however, it could be done for any T20 format like Intl T20, BBL, Natwest T20 etc. which are posted on Cricsheet. Also, only a subset of the yorkpy functions was used. There is a much wider variety of functions available.

Hope the T20 dance got your foot a-tapping!

 

You may also like
1. A primer on Qubits, Quantum gates and Quantum Operations
2. Computer Vision: Ramblings on derivatives, histograms and contours
3. Deep Learning from first principles in Python, R and Octave – Part 6
4. A Bluemix recipe with MongoDB and Node.js
5. Practical Machine Learning with R and Python – Part 4
6. Simulating the domino effect in Android using Box2D and AndEngine

To see all posts click Index of posts

Big Data-5: kNiFi-ing through cricket data with yorkpy

“The temptation to form premature theories upon insufficient data is the bane of our profession.”

                              Sherlock Holmes in The Valley of Fear by Arthur Conan Doyle

“If we have data, let’s look at data. If all we have are opinions, let’s go with mine.”

                              Jim Barksdale, former CEO Netscape 

In this post I use an Apache NiFi dataflow pipeline along with my Python package yorkpy to crunch through cricket data from Cricsheet. The data pipeline flows all the way from the source to the target analytics output. Apache NiFi was created to automate the flow of data between systems. NiFi dataflows enable the automated and managed flow of information between systems. This post automates the flow of data from Cricsheet, from where the zip file is downloaded, unpacked, processed and transformed, and finally T20 players are ranked.

While this is a straightforward example of what can be done, this pattern can be applied to real Big Data systems. For example, hypothetically, we could consider that we get several parallel streams of cricket data, or for that matter any sports-related data. There could be parallel dataflow pipelines that get the data from the sources. This would then be followed by data transformation modules and finally a module for generating analytics. At the other end, a UI based on AngularJS or ReactJS could display the results in a cool and awesome way.

Incidentally, the NiFi pipeline that I discuss in this post is a simplistic example and does not use the Big Data stack like HDFS, Hive, Spark etc. Nevertheless, the pattern used has all the modules of a Big Data pipeline, namely ingestion, unpacking, transformation and finally analytics. This NiFi pipeline demonstrates the flow using the regular file system of a Mac and my Python-based package yorkpy. The concepts mentioned could be used in a real Big Data scenario which has much fatter pipes of data coming in. If that were the case, the NiFi pipeline would utilize HDFS/Hive for storing the ingested data and PySpark/Scala for the transformation and analytics, along with other related technologies.

A pictorial representation is given below

In the diagram above, each of the vertical boxes could be any technology from the ever-proliferating Big Data stack, namely HDFS, Hive, Spark, Sqoop, Kafka, Impala and so on. Such a dataflow automation could be created whenever any big sporting event happens, as long as the data generated is large and there is a need for dynamic and automated reporting. The UI could be based on AngularJS/ReactJS and could display analytical tables and charts.

This post demonstrates one such scenario in which IPL T20 data is downloaded from the Cricsheet site, unpacked and stored in a specific directory. This dataflow automation is based on my yorkpy package. To know more about the yorkpy package see Pitching yorkpy … short of good length to IPL – Part 1 and the associated parts. The zip file from Cricsheet contains individual IPL T20 matches in YAML format. The convertYaml2PandasDataframeT20() function is used to convert the YAML files into pandas dataframes before storing them as CSV files. After this is done, the rankIPLT20Batting() function is used to perform the overall ranking of the T20 players. My yorkpy Python package has ~50+ functions that perform various analytics on any T20 data; e.g. it has the following classes of functions

  • analyze T20 matches
  • analyze performance of a T20 team in all matches against another T20 team
  • analyze performance of a T20 team against all other T20 teams
  • analyze performance of T20 batsman and bowlers
  • rank T20 batsmen and bowlers

The functions of yorkpy generate tables or charts. While this post demonstrates one scenario, we could use any of the yorkpy T20 functions, generate the output and display it on a widget in the UI, created with cool technologies like AngularJS/ReactJS, possibly in near real time as data keeps coming in.

To use yorkpy with NiFi, the following packages have to be installed in your environment

-pip install yorkpy
-pip install pyyaml
-pip install pandas
-yum install python-devel (equivalent in Windows)
-pip install matplotlib
-pip install seaborn
-pip install sklearn
-pip install datetime

I have created a video of the NiFi pipeline with the real dataflow from source to the ranked IPL T20 batsmen. Take a look at RankingT20PlayersWithNiFiYorkpy.

You can clone/fork the NiFi template from rankT20withNiFiYorkpy

The NiFi Data Flow Automation is shown below

1. Overall flow

The overall NiFi flow contains 2 Process Groups: a) DownloadAndUnpack and b) ProcessAndRankT20Players, which converts and ranks the IPL batsmen. While it appears that the Process Groups are disconnected, they are not. The first Process Group downloads the T20 zip file, unpacks the .zip file and saves the YAML files in a specific folder. The second Process Group monitors this folder and starts processing as soon as the YAML files are available. It processes the YAML files, converting them into dataframes before storing them as CSV files. The next processor then does the actual ranking of the batsmen before writing the output into IPLrank.txt.

1.1 DownloadAndUnpack Process Group

This process group is shown below

1.1.1 GetT20Data

The GetT20Data Processor downloads the zip file given the URL

The ${T20data} variable points to the specific T20 format that needs to be downloaded. I have set this to https://cricsheet.org/downloads/ipl.zip. This could be set to any other data set. In fact, we could have parallel data flows for different T20/sports data sets and generate analytics for each.

1.1.2 SaveUnpackedData

This processor stores the YAML files in a predetermined folder, so that the data can be picked up by the 2nd Process Group for processing.

1.2 ProcessAndRankT20Players Process Group

This is the second Process Group, which converts the YAML files to pandas dataframes before storing them as CSV files. The RankIPLPlayers processor will then read all the CSV files, stack them and then proceed to rank the IPL players. The Process Group is shown below.

1.2.1 ListFile and FetchFile Processors

The 2 Processors on the left, ListFile and FetchFile, get all the YAML files from the folder and pass them to the next processor.

1.2.2 convertYaml2DataFrame Processor

The convertYaml2DataFrame Processor uses ExecuteStreamCommand, which calls a Python script. The Python script invokes the yorkpy function convertYaml2PandasDataframeT20() as shown below.

The ${convertYaml2Dataframe} variable points to the Python file below, which invokes the yorkpy function yka.convertYaml2PandasDataframeT20()

import yorkpy.analytics as yka
import argparse
# Parse the YAML file name passed in by the ExecuteStreamCommand processor
parser = argparse.ArgumentParser(description='convert')
parser.add_argument("yamlFile", help="YAML File")
args = parser.parse_args()
yamlFile = args.yamlFile
# Convert the YAML match file into a pandas dataframe saved as CSV in the target folder
yka.convertYaml2PandasDataframeT20(yamlFile,"/Users/tvganesh/backup/software/nifi/ipl","/Users/tvganesh/backup/software/nifi/ipldata")

This function takes as input ${filename}, which comes from the FetchFile processor as a FlowFile. So I have added a concurrency of 8 to handle up to 8 FlowFiles at a time. The rule of thumb, as I read on the internet, is 2x to 4x the number of cores of your system. Since I have an 8-core Mac, I could possibly have gone up to ~30 concurrent threads. Also, the number of concurrent threads should be lower when the flow is run in an Oracle VirtualBox VM, since a vCore < an actual core.

The scheduling tab is as below

Here are the 8 concurrent Python threads on Mac at bottom right… (pretty cool!)

I have not fully tested how changes to the latency vs throughput slider affect performance.

1.2.3 MergeContent Processor

This processor’s only job is to trigger RankT20Players once all the FlowFiles have been merged into one file.

1.2.4 RankT20Players

This processor is an ExecuteStreamCommand Processor that executes a Python script which invokes a yorkpy function rankIPLT20Batting()

import yorkpy.analytics as yka
# Rank the IPL T20 batsmen from the CSV files generated in the previous step
rank = yka.rankIPLT20Batting("/Users/tvganesh/backup/software/nifi/ipldata")
# Print the top 15 ranked batsmen; this output is picked up by the next processor
print(rank.head(15))

1.2.5 OutputRankofT20Player Processor

This processor writes the generated rank to an output file.

1.3 Final Ranking of IPL T20 players

The Node.js-based web server picks up this file and displays the final ranks on the web page (the code is based on a helpful YouTube tutorial on reading from a file).

2. Final thoughts

As I have mentioned above, though this NiFi cricket dataflow automation does not use the Hadoop ecosystem, the pattern used is valid and can be applied, with some customization, to Big Data flows with parallel streams. I could also have done this on Oracle VirtualBox, but since the code is based on Python and pandas there is no real advantage to running it in VirtualBox. Give the NiFi flow a shot. Have fun!!!

Also see
1. My book ‘Deep Learning from first principles’ now on Amazon
2. Introducing QCSimulator: A 5-qubit quantum computing simulator in R
3. De-blurring revisited with Wiener filter using OpenCV
4. Practical Machine Learning with R and Python – Part 5
5. Natural language processing: What would Shakespeare say?
6. Getting started with Tensorflow, Keras in Python and R
7. Revisiting World Bank data analysis with WDI and gVisMotionChart

To see all posts click Index of posts