The computer is not a dumb machine!

“The computer is a dumb machine. It needs to be told what to do at every step.” How often have we heard this refrain from friends and from those who have only an incidental interaction with computers? To them a computer is like a ball that has to be kicked from place to place. These people are either ignorant of computers, say it by force of habit, or have a fear of computers. However, this is far from the truth. In this post, my 100th, I come to the defense of the computer in a slightly philosophical way.

The computer is truly a marvel of technology, and it embodies untapped intelligence. In my opinion even a safety pin is frozen intelligence: from a plain piece of metal we get something that can hold things together while pinning them, and do so safely.

Stating that the computer is a dumb machine is like saying that a television is dumb and an airplane is dumber. An airplane represents a modern miracle in which the laws of flight are built into every nut and bolt that goes into the plane. Its electronics and controls enable it to lift off, fly and land with precision, performing a small miracle on every flight.

Similarly, a computer, from the bare hardware to the uppermost layer of software, is nothing but layer upon layer of human ingenuity, creativity and innovation. At the bare metal, the hardware is made up of integrated circuits that execute upwards of a billion instructions per second. The circuits are organized so precisely that they work together to produce a coherent output, with individual operations completing in less than a billionth of a second.


On top of the bare-bones hardware we have programs that work at the level of assembly and machine code, made of 0’s and 1’s. Machine code is nothing more than amorphous strings of 0’s and 1’s. At this level the thing that is worked on (the object) and the thing that works on it (the subject) are indistinguishable; what distinguishes them is the context.

Over this layer we have the Operating System (OS), which I like to think of as the mind of the computer. The OS manages many things at once, much like the mind has complete control over the sense organs which receive external input. The OS manages processes, memory, devices and the CPU (its resources).

As humans, we pride ourselves on having consciousness. Without going into any metaphysical discussion of what consciousness is or isn’t, it is clear that the OS keeps the computer completely aware of the state of all its resources. Just as we react to data received through our sense organs, the computer reacts to input received through its devices (mouse, keyboard) or its memory. So does the computer have consciousness?

You say human beings are capable of thought. But what is thought other than a sensible evaluation of known concepts? In a way the OS, too, is constantly churning in the background, trying to make sense of the state of the CPU, the memory and the disk.

Not to give in, I can hear you say, “But human beings understand choice.” Really! So here is my program for a human being:

If provoked
Get angry

If insulted
Get hurt

If ego stoked
Go mad with joy

Just kidding! Anyway, recent advances in cognitive computing show that it is possible to have computers choose the best alternative. IBM’s Watson is capable of evaluating alternative choices.

Over the OS we have compilers, and above those we have several applications.
The computer truly represents layers and layers of solidified human thought. Whether it is the precise hardware circuitry, the OS, the compilers or any application, they are all the result of human thought, and they are all constantly at work in the computer.

So if your initial attempt to do something useful did not quite work out, remember that you are working with decades of human thought embodied in the computer. Your instructions need to be precise and logical; otherwise your attempts will be thwarted.

So whether it’s the computer, the mobile phone or your car, we should look for and appreciate the deep beauty that resides in these modern conveniences, gadgets and machines.


Working with binary trees in Lisp

Success finally! I have been trying to create and work with binary trees in Lisp for some time. It is really difficult in Lisp, in which there are no real data structures of this kind. Trees are so obvious in a language like C, where one can visualize the left and right branches with pointers. The structure of a node in C is

struct node {
    int value;
    struct node *left;
    struct node *right;
};

Lisp has no such structures. Lisp works on a list, or at best a list of lists. It is really how the program interprets the nested list of lists that makes the list a binary or an n-ary tree. As I mentioned before, I had not had much success in creating a binary tree in Lisp for quite some time. Finally I came across a small version of adding an element to a set in “Structure and Interpretation of Computer Programs” (SICP) by Harold Abelson, Gerald Jay Sussman and Julie Sussman. This is probably one of the best books I have read in a long time and contains some truly profound insights into computer programming.

I adapted the Scheme code into my version of adding a node. Finally I was able to code the inorder, preorder and postorder traversals.

(Note: You can clone the code below from GitHub: Binary trees in Lisp)

In this version a node of a binary tree is represented as
(entry left right), so
entry -> (car tree)
left-branch -> (cadr tree)
right-branch -> (caddr tree)

Here is the code
(defun entry (tree)
  (car tree))

(defun left-branch (tree)
  (cadr tree))

(defun right-branch (tree)
  (caddr tree))

;; Create a node in a binary tree
(defun make-tree (entry left right)
  (list entry left right))

;; Insert an element into the tree
(defun add (x tree)
  (cond ((null tree) (make-tree x nil nil))
        ((= x (entry tree)) tree)
        ((< x (entry tree))
         (make-tree (entry tree) (add x (left-branch tree)) (right-branch tree)))
        ((> x (entry tree))
         (make-tree (entry tree) (left-branch tree) (add x (right-branch tree))))))

So I can now create a tree with a create-tree function
(defun create-tree (elmnts)
  (dolist (x elmnts)
    (setf tree (add x tree))))
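Note that create-tree, as written, updates the global variable tree. As an aside, a small variant (just a sketch, using a hypothetical name build-tree and the same add function above) builds and returns the tree without touching a global:

(defun build-tree (elmnts)
  ;; Fold add over the elements, starting from the empty tree NIL,
  ;; and return the resulting tree
  (let ((tr nil))
    (dolist (x elmnts tr)
      (setf tr (add x tr)))))

So (setf tree (build-tree lst)) would give the same result as the calls below.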

(setf tree nil)
NIL

(setf lst (list 23 12 1 4 5 28 4 9 10 45 89))
(23 12 1 4 5 28 4 9 10 45 89)

(create-tree lst)
NIL

Now I display the tree
tree

(23 (12 (1 NIL (4 NIL (5 NIL (9 NIL (10 NIL NIL))))) NIL) (28 NIL (45 NIL (89 NIL NIL))))

Pictorially this is a binary search tree with 23 at the root, 12 and 28 as its children, and so on.

Now for the 3 types of traversals
(defun inorder (tree)
  (cond ((null tree))
        (t (inorder (left-branch tree))
           (print (entry tree))
           (inorder (right-branch tree)))))

(defun preorder (tree)
  (cond ((null tree))
        (t (print (entry tree))
           (preorder (left-branch tree))
           (preorder (right-branch tree)))))

(defun postorder (tree)
  (cond ((null tree))
        (t (postorder (left-branch tree))
           (postorder (right-branch tree))
           (print (entry tree)))))
[89]> (inorder tree)

1
4
5
9
10
12
23
28
45
89

[90]> (preorder tree)
23
12
1
4
5
9
10
28
45
89
T

[91]> (postorder tree)
10
9
5
4
1
12
89
45
28
23
23

Note: A couple of readers responded saying that I was very wrong in claiming that Lisp has no data structures. I agree; Lisp has evolved over the years to include data structures (structures, arrays, hash tables and CLOS classes, for example). I hope to pick up Lisp again some time later from where I left off! Till that time…



Taking baby steps in Lisp

Lisp can be both fascinating and frustrating: fascinating because you can write compact code to solve really complex problems, and frustrating because you can easily get lost in its maze of parentheses. I, for one, have been truly smitten by Lisp. My initial encounters with Lisp did not yield much success as I tried to come to terms with its strange syntax. The books I read on the language typically dwell on the exotic features of Lisp, like writing Lisp code to solve the Towers of Hanoi or the Eight Queens problem. They talk about functions returning functions, backquotes and macros that can make your head spin.

I found that this approach made the language extremely difficult to digest. So I decided to view Lisp through the eyes of a regular programming language like C, C++, Java, Perl, Python or Ruby. I was keen on being able to do regular things with Lisp before trying out its unique features. So I decided to investigate Lisp from this viewpoint and learn how to make it do mundane things: assignments, conditionals, loops, arrays, input and output and so on.

This post is centered on that approach.

Assignment statement

The most fundamental requirement in any language is to perform an assignment. For example, these are assignment statements in Lisp and their equivalents in C:

$ (setf x 5)                                  -> $ x = 5
$ (setf x (+ (* y 2) (* z 8)))                -> $ x = 2*y + 8*z

Conditional statement
 
There are a couple of forms of conditional statement in Lisp. The most basic is the ‘if’ special form, which gives you if-then-else but not an if-then-else-if chain:

if (condition) statement else-statement

In Lisp this is written as
$ (setf x 5)
$ (if (= x 5)
      (setf x (+ x 5))
      (setf x (- x 6)))
10

In C this is equivalent to
$ x = 5
$ if (x == 5)
x = x + 5;
else
x = x -6;

However, Lisp allows an if-then-else-if-else-if-else chain through the use of the COND form.

So we could write

$ (setf x 10)
$ (cond ((< x 5) (setf x (+ x 8)) (setf y (* 2 y)))
((= x 10) (setf x (* x 2)))
(t (setf x 8)))
20

The above statement in C would be
$ x = 10
$ y = 10
$ if (x < 5)
{
x = x + 8;
y = 2 * y;
}
else if (x == 10)
{
x = x * 2;
}
else
x = 8;

Loops
Lisp has many looping forms: dotimes, dolist, do, loop, loop for, etc. I found the following the most intuitive and the best way to get started.
$ (setf x 5)
$ (let ((i 0))
    (loop
      (setf y (* x i))
      (when (> i 10) (return))
      (print i) (prin1 y)
      (incf i)))

In C this could be written as
$ x = 5
for (i = 0; i <= 10; i++)
{
    y = x * i;
    printf("%d %d\n", i, y);
}

Another easy looping construct in Lisp is
(loop for x from 2 to 10 by 3
      do (print x))
In C this would be
for (x = 2; x <= 10; x += 3)
    printf("%d\n", x);
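The dotimes and dolist forms mentioned above are even simpler for the common cases; a couple of small examples:

(dotimes (i 5)            ; i takes the values 0, 1, 2, 3, 4
  (print (* i i)))

(dolist (x '(2 4 6 8))    ; iterate over the elements of a list
  (print (* x 10)))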

Arrays
To create an array of 10 elements with an initial value of 20
(setf numarray (make-array 10 :initial-element 20))
#(20 20 20 20 20 20 20 20 20 20)
To read an array element
$ (aref numarray 3)                    - - - > numarray[3]
For example
(setf x (* 2 (aref numarray 4)))     - - - - > x = numarray[4] * 2
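To write to an array element, setf is combined with aref. For example
(setf (aref numarray 3) 42)          - - - > numarray[3] = 42
(setf (aref numarray 0) (+ x 1))     - - - > numarray[0] = x + 1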

Functions
(defun square (x)
  (* x x))
This is the same as

int square (int x)
{
    return (x * x);
}

While in C you would invoke the function as
y = square (8)

In Lisp you would write as
(setf y (square 8))

Note: In Lisp a function is invoked as (function arg1 arg2 … argn), not as function(arg1, arg2, …, argn) as in C.

Structures
a) Create a global variable *db*
(defvar *db* nil)
 

b) Make a function to add an employee
$ (defun make-emp (name age title)
    (list :name name :age age :title title))
$ (add-emp (make-emp "ganesh" 49 "manager"))
$ (add-emp (make-emp "manish" 50 "gm"))
$ (add-emp (make-emp "ram" 46 "vp"))
$ (dump-db)
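The add-emp and dump-db helpers used above are not listed in this post; a minimal sketch of the two, assuming the plist records returned by make-emp and adapted along the lines of the simple database example in Practical Common Lisp, could be

(defun add-emp (emp)
  ;; Push the new employee record onto the global *db* list
  (push emp *db*))

(defun dump-db ()
  ;; Print each record as key: value pairs, one record per block
  (dolist (emp *db*)
    (format t "~{~a:~10t~a~%~}~%" emp)))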

For a more complete and excellent treatment of managing a simple DB, look at Practical Common Lisp by Peter Seibel.

Reading and writing to standard output
To write to standard output you can use
(print "This is a test") or
(print '(This is a test))
To read from standard input use
(let ((temp 0))
  (print '(Enter temp))
  (setf temp (read))
  (print (append '(the temp is) (list temp))))
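For formatted output, format is the usual workhorse; for instance, the last line above could also be written as

(format t "The temp is ~a~%" temp)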

Reading and writing to a file
The typical way to do this is to use

a) Read
(with-open-file (stream "C:\\acl82express\\lisp\\count.cl")
  (do ((line (read-line stream nil)
             (read-line stream nil)))
      ((null line))
    (print line)))

b) Write
(with-open-file (stream "C:\\acl82express\\lisp\\test.txt"
                        :direction :output
                        :if-exists :supersede)
  (write-line "test" stream)
  nil)

I found the following construct a lot easier
(let ((in (open "C:\\acl82express\\lisp\\count.cl" :if-does-not-exist nil)))
  (when in
    (loop for line = (read-line in nil)
          while line do (format t "~a~%" line))
    (close in)))

With the above you can get started on Lisp. However with just the above constructs the code one writes will be very “non-Lispy”. Anyway this is definitely a start.


The Future of Programming Languages

How will the computing landscape evolve in the years ahead? Clearly, computer architecture will evolve towards more parallelism, with multiple CPUs each handling a part of the problem in parallel. However, programming parallel architectures is no simple task and will challenge the greatest minds.

In a future where both the problems and the architectures will be extremely complex, the programming language itself will evolve towards simplicity. Programming will be based on the natural language we use to define problems. Behind the scenes of the natural-language interface will be complex Artificial Intelligence algorithms which will perform the difficult task of translating the problem definition into a high-level programming language like C++ or Java.

The Artificial Intelligence interface will handle the tasks of creating variables, forming loops, defining classes and handling errors. The code generated by the machine will have far fewer syntactical errors than code created by human beings. However, while a large set of problems can be solved through the AI interface, there will be a certain class of problems that will still require human intervention and the need to work in the languages of today.

One of the drivers for this natural language of programming, besides the complexity of the computer architecture, is the need to bring a larger section of domain experts to solving the problems in their fields without having to learn all the complex syntax and semantics of current programming languages.

This will allow everybody from astrophysicists and geneticists to historians and statisticians to utilize the computer to solve the difficult problems in their domains.


Ramblings on Lisp

In the world of programming languages Lisp can be considered truly ancient, along with its companion FORTRAN, yet it has survived to this day. Lisp had its origins as early as 1958; John McCarthy of MIT published the design of the language in an ACM paper titled “Recursive Functions of Symbolic Expressions and Their Computation by Machine”. McCarthy conceived Lisp as a mathematical notation for computer programs, based on the lambda calculus described by Alonzo Church in the 1930s.

Lisp has not had the popularity of more recent languages like C, C++ and Java, partly because it has an unfamiliar syntax and a very steep learning curve. The Lisp syntax, with its series of parentheses, can be particularly intimidating to beginners. However, it remains one of the predominant languages used in the AI domain.

Some of the key characteristics of Lisp are

Lisp derives its name from LISt Processing. While most programming languages compute on data, Lisp computes on a data structure, namely the list. A list can be visualized as a collection of elements which can be data, functions or lists themselves. Lisp’s power comes from the fact that the language includes in its syntax the operations that can be performed on lists and lists of lists. In fact many key features of the Lisp language have found their way into more recent languages like Python, Ruby and Perl.

Second, Lisp is a symbolic processing language. The ability to manipulate symbols gives Lisp a powerful edge over other programming languages in AI domains like theorem proving and natural language processing.

Thirdly, Lisp encourages a recursive style of programming, which often makes the code much shorter than in other languages. Recursion expresses a problem as a combination of a terminating condition and a self-similar sub-problem. A related advantage is tail recursion: when the recursive call is the last operation in a function, Scheme (and many Common Lisp implementations) can optimize the call so that the stack space used is O(1), rather than the O(n) stack growth of a naive recursive call in languages like C, C++ or Java, where the stack grows with each subsequent call.
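As a small illustration (a sketch with hypothetical function names), here are two ways of summing a list: the first is naive recursion, the second carries the running total in an accumulator so that the recursive call is in tail position and can be optimized:

(defun sum-list (lst)
  ;; Naive recursion: the addition happens after the recursive call returns,
  ;; so the stack grows with the length of the list
  (if (null lst)
      0
      (+ (car lst) (sum-list (cdr lst)))))

(defun sum-list-tail (lst &optional (acc 0))
  ;; Tail recursion: the partial sum is carried in the accumulator ACC and
  ;; the recursive call is the last thing evaluated
  (if (null lst)
      acc
      (sum-list-tail (cdr lst) (+ acc (car lst)))))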

Lisp blurs the distinction between functions and data. Functions take other functions as arguments during computation, and Lisp programs are themselves lists, so code can be constructed and manipulated by other code just as data is.

The closest analogy to this is machine code, which is a sequence of 32-bit binary words. Both the logic and the data on which it operates are 32-bit binary words and cannot be distinguished unless one knows where the program is supposed to start executing. A snapshot of consecutive memory locations shows only 32-bit binary words, each of which may represent either a logical or arithmetic operation or the data it operates on.

Lisp is a malleable language that allows programmers to tailor the language to their own convenience and to bend it to suit their programming style. Lisp programs tend to evolve bottom-up rather than with the top-down style adopted in other languages, and the design methodology takes an inside-out approach rather than an outside-in one.

Lisp has many eminent die-hard adherents who swear by the elegance and beauty of being able to solve difficult problems concisely. On the other hand there are those to whom Lisp represents an ancient Pharaoh’s curse that is difficult to get rid of.

However with Lisp, “Once smitten, you remain smitten”.


Singularity

Pete Mettle felt drowsy. He had been working for days on his new inference algorithm. Pete had been in the field of Artificial Intelligence (AI) for close to three decades and had established himself as the father of “semantics”. He was particularly renowned for his three principles of Artificial Intelligence, which he had postulated as the Principles of Learning:

The Principle of Knowledge Acquisition: This principle laid out the guidelines for knowledge acquisition by an algorithm. It clearly laid out the rules of what was knowledge and what was not, and could separate the wheat from the chaff in any textbook or research article.

The Principle of Knowledge Assimilation: This law gave the process for organizing the acquired knowledge into facts, rules and underlying principles. Knowledge assimilation involved storing the individual rules and the relations between them, and provided the basis for drawing conclusions from them.

The Principle of Knowledge Application: This principle, according to Pete, was the most important. It showed how all the knowledge acquired and assimilated could be used to draw inferences and conclusions. In fact it also showed how knowledge could be extrapolated to make safe conclusions.

The above three principles were hailed as a major landmark in AI. Pete started to work on an inference engine known as the “Zengine”, based on these three principles, and was almost finished fine-tuning his algorithm. Pete wanted to test the Zengine on the World Wide Web, which had grown to gigantic proportions. A report in the May 2025 issue of the Wall Street Journal mentioned that the total data held on the internet had crossed 400 zettabytes and that the daily data stored on the web was close to 20 terabytes. It was well known that there was an enormous amount of information on the web: wikis, blogs, articles, ideas, social networks and so on, on almost every conceivable topic under the sun.

Pete was given special permission by the governments of the world to run his Zengine on the internet. It was Pete’s theory that it would take the Zengine at least a year to process the information on the web and make any reasonable inferences from it. Accompanied by worldwide publicity, the Zengine started its work of assimilating the information on the World Wide Web. It was programmed to periodically give Pete a status update on its progress.

A few months passed. The Zengine kept giving updates on the number of sites, periodicals and blogs it had condensed into its knowledge database. After about 10 months Pete received a mail. It read: “Markets will crash in March 2026. Petrol prices will skyrocket.” – Zengine. Pete was surprised at the forecast, so he invoked the API to check on what basis the claim had been made. To his surprise and amazement he found that a lot of events happening around the world had been used to make the claim, and they clearly seemed to point in that direction. A couple of months down the line there was another terse statement: “Rebellion very likely in Mogadishu in Dec 2027.” – Zengine. The Zengine also came up with corollaries to Fermat’s last theorem. It was becoming clear to Pete and to everybody else that the Zengine was growing smarter by the day, and it was apparent that the Zengine would soon become more powerful than human beings.

Celestial events: Around this time peculiar events were observed all over the world. There was a great deal of celestial activity, and phenomena like the aurora borealis became commonplace. On Dec 12, 2026 there was an unusual amount of electrical activity in the sky; everywhere there were streaks of lightning. By evening, slivers of lightning had hit the earth in several parts of the world. In fact, viewed from outer space the earth would have resembled a “nebula sphere”, with lightning streaks racing towards it from all directions. This went on for many days. Simultaneously the Zengine was getting more and more powerful; it had learnt to spawn multiple processes to gather information and return it.

Time-space discontinuity: People everywhere were petrified by these strange phenomena. On the one hand there was the fear of the takeover of the web by the Zengine, and on the other was this increased celestial activity. Finally, one morning in Jan 2028, there was a powerful crack followed by a sonic boom, and everywhere people experienced a moment of discontinuity. In the briefest of moments there was a natural time-space discontinuity, and mankind had progressed to the next stage of evolution.

The unconscious, the subconscious and the conscious all became a single faculty of super-consciousness. It has been held since the time of Plato that man already knows everything there is to know: according to the Platonic doctrine of Recollection, human beings are born with a soul possessing all knowledge, and learning is just discovering or recollecting what the soul already knows. Similarly, according to Hindu philosophy, behind the individual consciousness of the Atman is the reality known as the Brahman, a universal consciousness attained in a deep state of mysticism through self-inquiry.

However, this evolution, by some strange quirk of coincidence, seemed to coincide with the development of the world’s first truly learning machine. In this super-conscious state a learning machine was not something to be feared but something that could be used to benefit mankind. Just as cranes lift and earthmovers perform tasks that are beyond our physical capacity, so also a learning machine was a useful invention that could be used to harness the knowledge in mankind’s storehouse – the World Wide Web.
