Natural language processing: What would Shakespeare say?

Here is a scene from Christopher Nolan’s classic movie Interstellar. In this scene Cooper, a crew member of the Endurance spaceship, which is on its way to 3 distant planets via a wormhole, is conversing with TARS, a former US Marine Corps robot, some year in the future.

TARS (flippantly): “Everybody good? Plenty of slaves for my robot colony?”
TARS: [as Cooper repairs him] Settings. General settings. Security settings.
TARS: Honesty, new setting: ninety-five percent.
TARS: Confirmed. Additional settings.
Cooper: Humor, seventy-five percent.
TARS: Confirmed. Self-destruct sequence in T minus 10, 9…
Cooper: Let’s make that sixty percent.
TARS: Sixty percent, confirmed. Knock knock.
Cooper: You want fifty-five?

Natural language has been an area of serious research for several decades, ever since Alan Turing in 1950 proposed a test in which a human evaluator would judge natural language conversations between another human and a machine designed to generate human-like responses, conducted behind closed doors. If the responses of the human and the machine were indistinguishable, then we could say that the machine had passed the Turing test, signifying machine intelligence.

How cool would it be if we could converse with machines using natural language, with all the subtleties of language including irony, sarcasm and humor? While considerable progress has been made in Natural Language Processing, e.g. Watson, Siri and Cortana, the ability to handle nuances like humor and sarcasm is probably many years away.

This post looks at one aspect of Natural Language Processing, namely the ability to predict the next word(s) given a word or phrase.

The title of this post should really be ‘Natural Language Processing: What would Shakespeare say, and what would you say?’, because this post includes two interactive apps that can predict the next word.

a) The first app, given a (Shakespearean) phrase, will predict the most likely word that Shakespeare would have said.
Try the Shiny app : What would Shakespeare have said?

b) The second app, given a regular phrase, will predict the next word(s) in regular day-to-day English usage.
Try the Shiny app: What would you say?

Check out my book ‘Deep Learning from first principles- In vectorized Python, R and Octave’. My book is available on Amazon as paperback ($16.99) and in Kindle version ($6.65/Rs449).

You may also like my companion book “Practical Machine Learning with R and Python: Second Edition- Machine Learning in stereo”, available on Amazon in paperback ($10.99) and Kindle ($7.99/Rs449) versions.

Natural Language Processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. NLP encompasses many areas of computer science, besides inputs from the domains of linguistics, psychology, information theory, mathematics and statistics.

However, NLP is a difficult domain, as each language has its own quirks and ambiguities, and English is no different. Consider the following 2 sentences:

Time flies like an arrow.
Fruit flies like a banana.

Clearly the 2 sentences mean entirely different things with respect to the words ‘flies like’. The English language is filled with many such ambiguous constructions.

There have been 2 main approaches to Natural Language Processing: the rationalist approach and the empiricist approach. The empiricists approached natural language as a data-driven problem based on statistics, while the rationalist school, led by the linguist Noam Chomsky, strongly believed that sentence structure should be analyzed at a deeper level than mere surface statistics.

In his book Syntactic Structures, Chomsky introduces a famous example in his criticism of finite-state probabilistic models. He cites 2 sentences: (a) ‘colorless green ideas sleep furiously’ and (b) ‘furiously sleep ideas green colorless’. Chomsky’s contention is that while neither sentence, nor any of their parts, has ever occurred in the past linguistic experience of English, it can easily be inferred that (a) is grammatical, while (b) is not. Chomsky’s argument is that sentence structure is critical to Natural Language Processing of any kind. Here is a good post by Peter Norvig, ‘On Chomsky and the two cultures of statistical learning’. In fact, from 1950 to the 1980s the empiricist approach fell out of favor, while reasonable progress was made based on the rationalist approach to NLP.

The return of the empiricists
But thanks to great strides in processing power and the significant drop in hardware costs, the empiricist approach to Natural Language Processing made a comeback in the mid-1980s. The use of probabilistic language models, combined with the increase in processing power, saw the rise of the empiricists again. There had also been significant improvements in machine learning algorithms, which allowed computing resources to be used more efficiently.

In this post I showcase 2 Shiny apps written in R that predict the next word given a phrase, using statistical approaches belonging to the empiricist school of thought. The 1st app tries to predict what Shakespeare would have said given a phrase (Shakespearean or otherwise), and the 2nd is a regular app that predicts what we would say in our regular day-to-day conversation. These apps predict the next word as you keep typing in each word.

In NLP the first step is to build a language model. In order to build a language model, the program ingests a large corpus of documents. For the a) Shakespearean app, the corpus is the “Complete Works of Shakespeare“. This is also available in Free ebooks by Project Gutenberg, but you will have to do some cleaning and tokenizing before using it. For the b) regular English next-word predicting app, the corpus is composed of several hundred MBs of tweets, news items and blogs.
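By way of illustration, and not as the apps’ actual pipeline (which I cannot share; see the note on the Coursera Honor Code below), a minimal corpus ingestion and cleaning step with the tm package might look like the sketch below. The file name ‘shakespeare.txt’ is just a placeholder.

library(tm)

# Read the raw text, e.g. the Project Gutenberg Complete Works of Shakespeare
# ("shakespeare.txt" is a placeholder file name for this sketch)
raw <- readLines("shakespeare.txt", warn = FALSE)
corpus <- VCorpus(VectorSource(raw))

# Typical cleaning steps before building the n-gram counts
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, stripWhitespace)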

Once the corpus is ingested, the software then creates an n-gram model. A 1-gram model is a representation of all unique single words and their counts. Similarly, a bigram model is a representation of all 2-word combinations and their counts found in the corpus. Likewise we can have trigram, quadgram and n-gram models as required. Typically language models don’t go beyond 5-grams, as the processing power needed increases for these larger n-gram models.
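As a sketch of what this looks like in R, bigram counts can be built with the RWeka tokenizer on the cleaned corpus from above (the apps’ real tokenization may well differ):

library(RWeka)

# Flatten the cleaned corpus back into plain text
text <- unlist(lapply(corpus, as.character))

# Tokenize into bigrams and count each unique bigram
bigrams <- NGramTokenizer(text, Weka_control(min = 2, max = 2))
bigram.counts <- sort(table(bigrams), decreasing = TRUE)

head(bigram.counts)   # the most frequent 2-word combinations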

The probability of a sentence ‘s’, P(s), can be determined using the chain rule. For the sentence ‘The quick brown fox jumped’ this gives
P(The quick brown fox jumped) =
P(The) * P(quick|The) * P(brown|The quick) * P(fox|The quick brown) * P(jumped|The quick brown fox)
In practice a beginning-of-sentence marker BOS is prepended to each sentence, so that the first word also has a conditioning context.

P(quick|The) is the probability of the word being ‘quick’ given that the previous word was ‘The’. Conditional probabilities with long histories are impractical to estimate directly, so they are approximated using the Markov assumption, which allows us to compute the conditional probability of a word based on just a couple of its preceding words. For a bigram model this gives the approximation
P(w_{i}|w_{1}w_{2}w_{3}..w_{i-1}) \approx P(w_{i}|w_{i-1})
so that the example sentence factorizes as P(The) * P(quick|The) * P(brown|quick) * P(fox|brown) * P(jumped|fox).

The Maximum Likelihood Estimate (MLE) for a bigram is given as follows
P_{MLE}(w_{i}|w_{i-1}) = c(w_{i-1},w_{i})/c(w_{i-1})
where c() denotes the count of a word or word sequence in the corpus.

Hence, for a given corpus, we can calculate the maximum likelihood estimate of a word given its previous word. This computation of the MLE extends to the trigram and the quadgram.

For a trigram
P_{MLE}(w_{i}|w_{i-2}w_{i-1}) = c(w_{i-2}w_{i-1}w_{i})/c(w_{i-2}w_{i-1})
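Translated into R, the bigram MLE is a simple ratio of the count tables built earlier; this is a generic sketch, not the apps’ code.

# Unigram counts, analogous to the bigram counts above
unigrams <- NGramTokenizer(text, Weka_control(min = 1, max = 1))
unigram.counts <- table(unigrams)

# MLE of P(w_i | w_{i-1}) = c(w_{i-1}, w_i) / c(w_{i-1})
bigram.mle <- function(w1, w2) {
    key <- paste(w1, w2)
    c12 <- if (key %in% names(bigram.counts)) bigram.counts[[key]] else 0
    c1  <- if (w1 %in% names(unigram.counts)) unigram.counts[[w1]] else 0
    if (c1 == 0) 0 else c12 / c1
}

bigram.mle("the", "quick")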

Smoothing techniques
The MLE estimates for many bigrams and trigrams will be 0, because we may not yet have seen certain combinations. But the fact that we have not seen these combinations in the corpus does not mean that they can never occur. So the MLEs for the bigrams, trigrams etc. have to be smoothed so that no combination has a conditional probability of 0. One such method is ‘Laplace smoothing’. This smoothing steals from the probability mass of the words that occur in the corpus and redistributes it to the combinations that do not occur in the corpus. It is the simplest smoothing technique and is also known as ‘add-1’ smoothing, as it requires that 1 be added to all counts.

So the MLE below
P_{MLE}(w_{i}|w_{i-1}) = c(w_{i-1},w_{i})/c(w_{i-1})

becomes, with add-1 smoothing,
P_{Add1}(w_{i}|w_{i-1}) = (c(w_{i-1},w_{i})+1)/(c(w_{i-1})+V)
where V is the vocabulary size, i.e. the number of unique words in the corpus.
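In R the add-1 estimate changes the MLE ratio only slightly; again a generic sketch over the count tables built earlier.

# V is the vocabulary size, i.e. the number of unique unigrams
V <- length(unigram.counts)

# Add-1 (Laplace) smoothed P(w_i | w_{i-1})
bigram.addone <- function(w1, w2) {
    key <- paste(w1, w2)
    c12 <- if (key %in% names(bigram.counts)) bigram.counts[[key]] else 0
    c1  <- if (w1 %in% names(unigram.counts)) unigram.counts[[w1]] else 0
    (c12 + 1) / (c1 + V)
}

# An unseen bigram now gets a small nonzero probability instead of 0
bigram.addone("colorless", "ideas")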

This smoothing is done for the bigram, trigram and quadgram models. Smoothing is usually used with an associated technique called ‘backoff’. If the phrase is not found in an n-gram model then we need to back off to an (n-1)-gram model. For e.g. a lookup will first be done in the quadgrams; if not found, the algorithm will back off to the trigrams, the bigrams and finally to the unigrams.

Hence if we had the phrase
“on my way”

the smoothed MLEs for the quadgrams will be checked for the next word. If no match is found, the algorithm backs off and searches the smoothed MLEs for the trigrams for the phrase ‘my way’, and if that is not found it searches the bigrams for the next word after ‘way’.
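The backoff loop itself can be sketched as below. The helper predict.from.ngram() is hypothetical: assume it looks up the context in the smoothed n-gram table for the given order and returns the most probable next word, or NULL if the context is unseen.

# Back off from the quadgram to the trigram, bigram and finally unigram model.
# predict.from.ngram(context, n) is a hypothetical helper for this sketch.
predict.next.word <- function(phrase) {
    words <- unlist(strsplit(tolower(phrase), "\\s+"))
    for (n in 4:2) {
        if (length(words) >= n - 1) {
            context <- paste(tail(words, n - 1), collapse = " ")
            candidate <- predict.from.ngram(context, n)
            if (!is.null(candidate)) return(candidate)
        }
    }
    # Nothing matched: fall back to the most frequent unigram
    names(which.max(unigram.counts))
}

predict.next.word("on my way")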

One such method is Katz backoff, which is based on the following idea.
Bigrams with nonzero counts are discounted according to a discount ratio
d_{r} = r^{*}/r
where r is the observed count of a bigram and r^{*} is its Good-Turing adjusted count.

The count mass subtracted from the nonzero counts is then redistributed among the zero-count bigrams according to the next lower-order distribution (i.e. the unigram model).
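To make the discount concrete with some made-up counts: suppose the corpus has N_{1} = 1000 bigram types that occur exactly once and N_{2} = 400 that occur exactly twice. The Good-Turing adjusted count for a once-seen bigram is r^{*} = (r+1)N_{r+1}/N_{r} = 2 \times 400/1000 = 0.8, so the discount ratio is d_{1} = r^{*}/r = 0.8, and the 20% of probability mass shaved off the once-seen bigrams is what gets redistributed to the unseen ones.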

Better performance is obtained with the Kneser-Ney algorithm, which computes the continuation probability of words, i.e. how readily a word appears in new contexts. The Kneser-Ney estimate for a bigram is included below
P_{\mathit{KN}}(w_i \mid w_{i-1}) = \dfrac{\max(c(w_{i-1} w_i) - \delta, 0)}{\sum_{w'} c(w_{i-1} w')} + \lambda(w_{i-1}) \dfrac{\left| \{ w_{i-1} : c(w_{i-1}, w_i) > 0 \} \right|}{\left| \{ (w_{j-1}, w_j) : c(w_{j-1},w_j) > 0\} \right|}

where \delta is a fixed discount and the normalizing weight \lambda(w_{i-1}) is

\lambda(w_{i-1}) = \dfrac{\delta}{c(w_{i-1})} \left| \{w' : c(w_{i-1}, w') > 0\} \right|
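A direct, if inefficient, R transcription of these two equations with a typical discount of \delta = 0.75 might read as follows. This is a sketch over the count tables built earlier, not the apps’ implementation; \sum_{w'} c(w_{i-1} w') is taken to be the unigram count c(w_{i-1}).

delta <- 0.75

# Split each bigram key "w1 w2" into its two words, once, for reuse
bigram.words <- do.call(rbind, strsplit(names(bigram.counts), " "))

kneser.ney <- function(w1, w2) {
    key <- paste(w1, w2)
    c12 <- if (key %in% names(bigram.counts)) bigram.counts[[key]] else 0
    c1  <- if (w1 %in% names(unigram.counts)) unigram.counts[[w1]] else 0
    if (c1 == 0) return(0)
    # Continuation probability: the share of distinct bigram types that end in w2
    p.cont <- sum(bigram.words[, 2] == w2) / nrow(bigram.words)
    # lambda(w1): the discounted mass available for redistribution
    lambda <- (delta / c1) * sum(bigram.words[, 1] == w1)
    max(c12 - delta, 0) / c1 + lambda * p.cont
}

kneser.ney("my", "way")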

This post was inspired by the final capstone project, in which I had to create a Shiny app for predicting the next word, as a part of the Data Science Specialization conducted by Johns Hopkins University Bloomberg School of Public Health at Coursera.

I further extended this concept to try to predict what Shakespeare would have said. For this I ingested the Complete Works of Shakespeare as the corpus. Add-1 smoothing with Katz backoff and the Kneser-Ney algorithm were then implemented on the unigrams, bigrams, trigrams and quadgrams.

Note: This post in no way tries to belittle the genius of Shakespeare. From the table below it can be seen that our day-to-day conversation has approximately 210K, 181K & 65K unique bigrams, trigrams and quadgrams respectively. On the other hand, Shakespearean literature has 271K, 505K & 517K unique bigrams, trigrams and quadgrams. It can be seen that Shakespeare had a rich and complex set of word combinations.

Not surprisingly, the computation of the conditional and continuation probabilities for the Shakespearean literature is orders of magnitude larger.
Here is a small table as comparison

Unique n-grams    Day-to-day corpus    Shakespearean corpus
Bigrams           ~210K                ~271K
Trigrams          ~181K                ~505K
Quadgrams         ~65K                 ~517K

This implementation was done entirely in R. The main R packages used were tm, RWeka and dplyr. Here is a slide deck on the implementation details of the apps and the key lessons learnt: PredictNextWord.
Unfortunately I will not be able to include the implementation details themselves, as I am bound by the Coursera Honor Code.

If you have not already given the apps a try, do check them out
Try the Shiny apps
What would Shakespeare say?
What would you say?


You may like
1. My book ‘Practical Machine Learning in R and Python: Second edition’ on Amazon
2. Introducing cricketr! : An R package to analyze performances of cricketers
3. cricketr digs the Ashes!
4. A peek into literacy in India: Statistical Learning with R
5. A crime map of India in R – Crimes against women
6. Analyzing cricket’s batting legends – Through the mirage with R
7. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid

Also see
1. Re-working the Lucy-Richardson Algorithm in OpenCV
2.  What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3.  Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. TWS-4: Gossip protocol: Epidemics and rumors to the rescue
5. Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data
6. Deblurring with OpenCV:Weiner filter reloaded

What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1

Published in IBM developerWorks: ‘Whats up Watson? Using Watson QAAPI with Bluemix and NodeExpress’

In this post I take the famed IBM Watson through its paces (yes, that’s right, this post is about using the same IBM Watson which trounced 2 human Jeopardy titans in a classic duel in 2011). IBM’s Watson (see What is Watson?) is capable of understanding the nuances of the English language and heralds a new era in the domain of cognitive computing. IBM Bluemix now includes 8 services from Watson, ranging from Concept Expansion, Language Identification and Machine Translation to Question-Answer. For more information on Watson’s QAAPI and the many services that have been included in Bluemix, please see Watson Services.

In this article I create an application on IBM Bluemix and use Watson’s QAAPI (Question-Answer API) as a service to the Bluemix application. For the application I have used NodeExpress to create a web server and POST the REST queries to Watson. Jade is used to format the results of Watson’s response.

In this current release of Bluemix, Watson comes with a corpus of medical facts. In other words, Watson has been made to ingest medical documents in multiple formats (doc, pdf, html, text etc.) and the user can pose medical questions to Dr. Watson. In its current avatar, its medical diet consists of dishes from CDC Health Topics, the National Heart, Lung, and Blood Institute (NHLBI), the National Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK), the National Institute of Neurological Disorders and Stroke (NINDS), PDQ (Physician Data Query) etc.

Try out my Watson app  on Bluemix here –  Whats up Watson?

To get down to Watson QAAPI business with the Bluemix app, you can fork the code from IBM DevOps at whatsup and download it to your local machine. You can also clone the code from GitHub at whatsup.

1. To get started, go to the directory where you have cloned the code for the Whatsup app.

2. Log in to Bluemix using Cloud Foundry’s ‘cf’ commands as shown below

cf login -a

3. Next push the app to Bluemix
cf push whatsup -p . -m 512M

In the Bluemix dashboard you should see the ‘whatsup’ app running. Now click ‘Add Service’ and under Watson add ‘Question Answer’.


Add Watson QAAPI


You will be prompted with ‘Restage Application’. Click ‘OK’. Once you have the app running you should be able to get started with Dr. Watson.

The code for this Bluemix app with QAAPI as a service is based on the following article: Examples using the Question and Answer API.

4. Here’s a look at the code for the Bluemix & Watson app.

In this Bluemix app I show the different types of questions we can ask Watson and the responses we get from it. The app has a route for each of the different question types and options:

a. Simple synchronous query: Post a simple synchronous query to Watson
This is the simplest query that we can pose to Watson. Here we need to include just the text of the question and a Sync Timeout. The Sync Timeout denotes the time the client will wait for responses from the Watson service.
// Ask Watson a simple synchronous query


b. Evidence-based question: Ask Watson to respond to evidence given to it
Ask Watson for responses based on evidence provided, like medical conditions etc. This would be used for diagnostic purposes, I presume.
// Ask Watson for responses based on evidence provided

c. Request for a specified set of answers to a question: Ask Dr. Watson to give a specified number of responses to a question
// Ask Watson to provide specified number of responses to a query

d. Get a formatted response to a question: Ask Dr. Watson to format the response to the question
// Get a formatted response from Watson for a query

5. To get started with Watson, we need to connect the Bluemix app to Watson’s QAAPI as a service by parsing the VCAP environment variable. This is shown below.

//Get the VCAP environment variables to connect Watson service to the Bluemix application

o o o
if (process.env.VCAP_SERVICES) {
    var VCAP_SERVICES = JSON.parse(process.env.VCAP_SERVICES);
    // Retrieve the credential information from VCAP_SERVICES for Watson QAAPI
    var hostname   = VCAP_SERVICES["Watson QAAPI-0.1"][0].name;
    var passwd     = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.password;
    var userid     = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.userid;
    var watson_url = VCAP_SERVICES["Watson QAAPI-0.1"][0].credentials.url;
}

Next we need to format the header for the POST request

// Build the request headers (a sketch: Basic auth from the credentials
// retrieved above; the exact header set is an assumption of this example)
var headers = {
    'Content-Type': 'application/json',
    'Authorization': 'Basic ' + new Buffer(userid + ':' + passwd).toString('base64')
};

var parts = url.parse(watson_url);
// Create the request options to POST our question to Watson
var options = {
    host: parts.hostname,
    port: 443,
    path: parts.pathname,
    method: 'POST',
    headers: headers,
    rejectUnauthorized: false, // ignore certificates
    requestCert: true,
    agent: false
};

The question that is to be asked of Watson needs to be formatted appropriately, based on the input received in the corresponding form (e.g. simplesync.jade).

// Get the values from the form
var syncTimeout = req.body.timeout;
var query = req.body.query;
// Create the question text to ask Watson
var question = {question : {questionText : query}};
var evidence = {"evidenceRequest" : {"items" : 1, "profile" : "yes"}};
// Set the POST body and send the question to Watson

Now you POST the question to Dr. Watson and receive the streamed response using Node.js’ .on('data') and .on('end') events, as shown below.

var req = https.request(options, function(result) {
    // Retrieve the result from Watson chunk by chunk
    result.on('data', function(chunk) {
        output += chunk;
    });
    result.on('end', function() {
        // Capture Watson's response in output and parse the answer fields
        var results = JSON.parse(output);
        // Render the parsed results back to the client with the 'answer' Jade template
        res.render('answer', { results : results });
    });
});
// POST the question in the request body
req.write(JSON.stringify(question));
req.end();

The results are parsed and formatted for display using Jade. For the Jade templates I have used a combination of Jade and inline HTML tags. (Jade can occasionally be very stubborn and make you sweat quite a bit, so I took the easier route of inline HTML tagging. In a later post I will try out CSS stylesheets to format the response.)

Included below is part of the Jade template with inline HTML tagging

o o o
<h2 style="color:blueviolet"> Question Details </h2>
for result in results.question.qclasslist
  p <font color="blueviolet"> Value </font> = <font color="black"> #{result.value} </font>
p <font color="blueviolet"> Focuslist </font> = <font color="black"> #{results.question.focuslist[0].value} </font>
// The 'How' query's response does not include latlist. Hence the conditional below.
if results.question.latlist
  p <font color="blueviolet"> Latlist </font> = <font color="black"> #{results.question.latlist[0].value} </font>

o o o

Now that the code is all set, you can fire up Watson. To do this click on the route.


Click the route and, lo and behold, you should see Watson ready and raring to go.


As the display shows, there are 4 different Question-Answer options for the Watson QAAPI

Simple Synchronous Question-Answer
This option is the simplest. Here we need to include just the text of the question and a Sync Timeout. The question can be any medical-related question, as Watson in its current Bluemix avatar has a medical corpus.

For e.g.
1) What is carotid artery disease?
2) What is the difference between hepatitis A and hepatitis B?

The Sync Timeout parameter specifies the number of seconds the QAAPI client will wait for the streaming response from Watson. An example question and Watson’s response are included below

When we click Submit Watson spews out the following response


Evidence based response:

In this mode of operation, questions can be posed to Watson based on observed evidence. Watson will output all relevant information based on the evidence provided. As seen in the output, Watson provides a “confidence factor” for each of its responses.


Watson gives response with appropriate confidence values based on the given evidence


Question with specified number of responses
In this option we can ask Watson to provide at least ‘n’ items in its response. If it cannot provide that many items, it will give an error notification.


This will bring up the following screen, where the question asked is “What is the treatment for Down’s syndrome?” with Items set to 3.

Watson gives 3 items in the response as shown below

Formatted response: Here Watson gives a formatted response to the question asked. Since I had already formatted the response using Jade, it does not do extra formatting, as seen in the screenshot.

Updated synonym-based response: In this response we can change the synonym list, based on which Watson will search its medical corpus and modify its response. The synonym list for the question “What is fever?” is shown below. We can turn synonyms off by setting them to ‘false’ and possibly add other synonyms for the search.

This part of the code has not been included in this post and is left as an exercise to the reader 🙂

As mentioned before, you can fork and clone the code from IBM DevOps at whatsup or clone it from GitHub at whatsup.

There are many sections to Watson’s answer which cannot all be included in this post, as the amount of information is large and really needs to be pared down to pick out the important details. I am including small sections from each part of Watson’s response below, to the question “How is carotid artery disease treated?”

I will follow up this post with another post in which I will take a closer look at Watson’s response, which has many parts to it:

– Question Details

– Evidence list

– Synonym list

– Answers


– Error notifications

There you have it.  Go ahead and get all your medical questions answered by Watson.

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Other posts on Bluemix

1. Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
2. A Bluemix recipe with MongoDB and Node.js
3. A Cloud Medley with IBM’s Bluemix, Cloudant and Node.js
4.  Bend it like Bluemix, MongoDB with autoscaling – Part 1

You may also find the following interesting
1. Informed choices through Machine Learning : Analyzing Kohli, Tendulkar and Dravid
2. Informed choices through Machine Learning-2: Pitting together Kumble, Kapil,
3. A crime map of India in R: Crimes against women
4. Analyzing cricket’s batting legends – Through the mirage with R

Find me on Google+