Working with binary trees in Lisp

Success finally! I have been trying to create and work with binary trees in Lisp for some time. It felt really difficult in Lisp, where there are no C-style structures to lean on. Trees are so obvious in a language like C, where one can visualize the left & right branches with pointers. The structure of a node in C is

struct node {
    int value;
    struct node *left;
    struct node *right;
};

Lisp has no such structures. Plain Lisp data is a list, or at best a list of lists; it is really how the program interprets the nested lists that makes a list a binary or an n-ary tree. As I mentioned before, I had not had a whole lot of success creating a binary tree in Lisp for quite some time. Finally I happened to come across a small version of adding an element to a set in “Structure and Interpretation of Computer Programs (SICP)” by Harold Abelson, Gerald Jay Sussman and Julie Sussman. This is probably one of the best books I have read in a long time and contains some truly profound insights into computer programming.

I adapted the Scheme code into my own version of adding a node. Finally I was able to code the in-order, pre-order and post-order traversals.

(Note: You can clone the code below from GitHub : Binary trees in Lisp)

In this version a node of a binary tree is represented as
(node left right) so
node -> (car tree)
left branch -> (cadr tree)
right branch -> (caddr tree)

Here is the code
(defun entry (tree)
(car tree))

(defun left-branch (tree)
(cadr tree))

(defun right-branch (tree)
(caddr tree))

;; Create a node in a binary tree
(defun make-tree (entry left right)
  (list entry left right))

;; Insert an element into the tree
(defun add (x tree)
  (cond ((null tree) (make-tree x nil nil))
        ((= x (entry tree)) tree)
        ((< x (entry tree))
         (make-tree (entry tree) (add x (left-branch tree)) (right-branch tree)))
        ((> x (entry tree))
         (make-tree (entry tree) (left-branch tree) (add x (right-branch tree))))))

So I can now create a tree with a create-tree function (note that it updates the global variable tree)
(defun create-tree (elmnts)
  (dolist (x elmnts)
    (setf tree (add x tree))))
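As a cross-check, the same representation and insertion can be sketched in Python (my own sketch, not part of the original post), with a node as the list [entry, left, right]:

```python
# Sketch of the same binary search tree; a node is [entry, left, right].
def make_tree(entry, left, right):
    return [entry, left, right]

def add(x, tree):
    """Insert x, returning a new tree; duplicates are ignored."""
    if tree is None:
        return make_tree(x, None, None)
    entry, left, right = tree
    if x == entry:
        return tree
    if x < entry:
        return make_tree(entry, add(x, left), right)
    return make_tree(entry, left, add(x, right))

def create_tree(elements):
    tree = None
    for x in elements:
        tree = add(x, tree)
    return tree

print(create_tree([23, 12, 1, 4, 5, 28, 4, 9, 10, 45, 89]))
```

Running this produces the same nested-list shape as the Lisp version, with None in place of NIL.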

(setf tree nil)
NIL

(setf lst (list 23 12 1 4 5 28 4 9 10 45 89))
(23 12 1 4 5 28 4 9 10 45 89)

(create-tree lst)
NIL

Now I display the tree
tree

(23 (12 (1 NIL (4 NIL (5 NIL (9 NIL (10 NIL NIL))))) NIL) (28 NIL (45 NIL (89 NIL NIL))))

This can be represented pictorially as

Now I created the 3 types of traversals
(defun inorder (tree)
  (cond ((null tree))
        (t (inorder (left-branch tree))
           (print (entry tree))
           (inorder (right-branch tree)))))

(defun preorder (tree)
  (cond ((null tree))
        (t (print (entry tree))
           (preorder (left-branch tree))
           (preorder (right-branch tree)))))
(defun postorder (tree)
  (cond ((null tree))
        (t (postorder (left-branch tree))
           (postorder (right-branch tree))
           (print (entry tree)))))
[89]> (inorder tree)

1
4
5
9
10
12
23
28
45
89

[90]> (preorder tree)
23
12
1
4
5
9
10
28
45
89
T

[91]> (postorder tree)
10
9
5
4
1
12
89
45
28
23
23
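The three traversals can likewise be sketched in Python against the [entry, left, right] list representation (again my own sketch for comparison):

```python
# In-order, pre-order and post-order traversal of an [entry, left, right] tree.
def inorder(tree):
    if tree is None:
        return []
    entry, left, right = tree
    return inorder(left) + [entry] + inorder(right)

def preorder(tree):
    if tree is None:
        return []
    entry, left, right = tree
    return [entry] + preorder(left) + preorder(right)

def postorder(tree):
    if tree is None:
        return []
    entry, left, right = tree
    return postorder(left) + postorder(right) + [entry]

tree = [23, [12, [1, None, [4, None, [5, None, [9, None, [10, None, None]]]]], None],
        [28, None, [45, None, [89, None, None]]]]
print(inorder(tree))  # in-order visits the elements in sorted order
```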

Note: A couple of readers responded saying that I was very wrong in claiming that Lisp has no data structures. I agree; Lisp has evolved over the years to include data structures. I hope to pick up Lisp again some time from where I left off! Till that time…


Find me on Google+

Test driving Apache Hadoop: Standalone & pseudo-distributed mode

The Hadoop paradigm originated at Google and is used for crunching large data sets. It is ideally suited for applications like Big Data analytics, creating the inverted index used by search engines, and other problems which require terabytes of data to be processed in parallel. One key aspect of Hadoop is that it runs on commodity servers; hence server or disk crashes and network issues are assumed to be the norm rather than the exception.

The Hadoop paradigm is made up of the Map-Reduce and HDFS parts. Map-Reduce has 2 major components to it. The Map part takes as input key-value pairs and emits transformed key-value pairs; for e.g. Map could count the number of occurrences of words, or create an inverted index of a word and its location in a document. The Reduce part takes as input the key-value pairs emitted by the Map part and performs another operation on them, e.g. summing up the counts of words. A great tutorial on Map-Reduce can be found at http://developer.yahoo.com/hadoop/tutorial/module4.html
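The word-count flavour of this can be sketched in a few lines of plain Python (a toy illustration of the Map and Reduce phases only, not Hadoop API code):

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in the input lines.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)

# Shuffle + Reduce phase: group the pairs by key and sum the counts.
def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(map_phase(lines)))  # 'the' occurs 3 times, 'fox' twice
```

In Hadoop the same two functions would run distributed across many machines, with the framework doing the grouping between the phases.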

The HDFS (Hadoop Distributed File System) is the special storage that is used in tandem with the Map-Reduce algorithm. The HDFS distributes data among Datanodes, while a Namenode maintains the metadata of where individual pieces of data are stored.

To get started with Apache Hadoop download a stable release of Hadoop from (e.g. hadoop-1.0.4.tar.gz)

http://hadoop.apache.org/common/releases.html#Download

a) Install Hadoop on your system preferably in /usr/local/
tar xzf ./Downloads/hadoop-1.0.4.tar.gz

sudo mv hadoop-1.0.4 hadoop (rename hadoop-1.0.4 to hadoop for convenience)
sudo chown -R hduser:hadoop hadoop
Apache Hadoop requires Java to be installed. Download and install Java on your machine from

http://www.oracle.com/technetwork/java/javase/downloads/jdk-7u4-downloads-1591156.html

b) After you have installed Java set $JAVA_HOME in
/usr/local/hadoop/conf/hadoop-env.sh
export JAVA_HOME=/usr/bin/java (uncomment and set the correct path to your JDK)

c) Create a user hduser in group hadoop
For this click  Applications->Other->User & Groups
Choose Add User – hduser  &  Add Group – hadoop
Choose properties and add hduser to the hadoop group.

Standalone Operation
Under root do
/usr/sbin/sshd
then
$ssh localhost

If you cannot ssh to localhost without a passphrase then do the following
$ ssh-keygen -t rsa -P ""

You will get the following

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.


$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Now  re-run
$ssh localhost – This time it should go fine

Create a directory input and copy *.xml files from conf/
$mkdir input
$cp /usr/local/hadoop/share/hadoop/templates/conf/*.xml input

Then execute the following. This searches for strings matching 'dfs[a-z.]+' in all the XML files under the input directory
$/usr/local/hadoop/bin/hadoop jar /usr/share/hadoop/hadoop-examples-1.0.3.jar grep input output 'dfs[a-z.]+'
You should see
[root@localhost hadoop]# /usr/local/hadoop/bin/hadoop jar /usr/share/hadoop/hadoop-examples-1.0.3.jar grep input output 'dfs[a-z.]+'
12/06/10 13:00:51 INFO util.NativeCodeLoader: Loaded the native-hadoop library
..

12/06/10 13:01:45 INFO mapred.JobClient:     Reduce output records=38
12/06/10 13:01:45 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=0
12/06/10 13:01:45 INFO mapred.JobClient:     Map output records=38

This indicates that there are 38 record strings matching 'dfs[a-z.]+' in the input.
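What this example job computes can be mimicked in a few lines of Python (an illustration of the grep-and-count semantics only, not how Hadoop runs it):

```python
import re
from collections import Counter

# Count occurrences of strings matching dfs[a-z.]+ across some XML-ish text,
# the way the hadoop-examples grep job does over the files in input/.
texts = [
    "<name>dfs.replication</name><value>1</value>",
    "<name>dfs.data.dir</name><name>dfs.data.dir</name>",
]

counts = Counter()
for text in texts:
    counts.update(re.findall(r"dfs[a-z.]+", text))

for name, n in counts.most_common():
    print(n, name)
```

The per-match counting happens in the Map phase and the summing in the Reduce phase; the final output mirrors the (count, string) lines shown later under step i).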
If you get an error
java.lang.OutOfMemoryError: Java heap space
then increase the heap size from 128 MB to 1024 MB as below

<property>
<name>mapred.child.java.opts</name>
<value>-server -Xmx1024m -Djava.net.preferIPv4Stack=true</value>
</property>
….

Pseudo-distributed mode
In the pseudo-distributed mode separate Java processes are started for the JobTracker which schedules tasks, the TaskTracker which executes tasks, the NameNode which holds the HDFS metadata and the DataNode which stores the data.
A good post on a Hadoop single-node setup is Michael Noll's post – Running Hadoop on Ubuntu Linux (single node cluster)

a) Execute the following commands
. ./.bashrc
$mkdir -p /home/hduser/hadoop/tmp
$chown hduser:hadoop /home/hduser/hadoop/tmp
$chmod 750 /home/hduser/hadoop/tmp

b) Do the following
Note: The files core-site.xml, mapred-site.xml & hdfs-site.xml exist under both
/usr/local/hadoop/share/hadoop/templates/conf & /usr/local/hadoop/conf
It appears that Apache Hadoop gives precedence to /usr/local/hadoop/conf. So add the following between <configuration> and </configuration>
In file /usr/local/hadoop/conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system.  Either the
literal string "local" or a host:port for NDFS.
</description>
<final>true</final>
</property>

In /usr/local/hadoop/conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<final>true</final>
</property>

In /usr/local/hadoop/conf/hdfs-site.xml add
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

Now perform
$sudo /usr/sbin/sshd
$ssh localhost (Note you may have to generate a key as above if you get an error)

c) Since the pseudo-distributed mode will use the HDFS file system we need to format it. So run the following command
$$HADOOP_HOME/bin/hadoop namenode -format
[root@localhost hduser]#  /usr/local/hadoop/bin/hadoop namenode -format
12/06/10 15:48:16 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1335192; compiled by ‘hortonfo’ on Tue May  8 20:16:59 UTC 2012
************************************************************/

12/06/10 15:48:17 INFO common.Storage: Image file of size 110 saved in 0 seconds.
12/06/10 15:48:18 INFO common.Storage: Storage directory /home/hduser/hadoop/tmp/dfs/name has been successfully formatted.
12/06/10 15:48:18 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
d) Now start all the Hadoop processes
$/usr/local/hadoop/bin/start-all.sh
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-localhost.localdomain.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/root/hadoop-root-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-localhost.localdomain.out
localhost: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-localhost.localdomain.out
Verify that all processes have started by executing /usr/java/jdk1.7.0_04/bin/jps
[root@localhost hduser]# /usr/java/jdk1.7.0_04/bin/jps
10971 DataNode
10866 NameNode
11077 SecondaryNameNode
11147 JobTracker
11264 TaskTracker
11376 Jps

You will see the JobTracker, TaskTracker, NameNode, DataNode and SecondaryNameNode

You can also do netstat -plten | grep java
tcp        0      0 0.0.0.0:50090               0.0.0.0:*                   LISTEN      0          166832     11077/java
tcp        0      0 0.0.0.0:50060               0.0.0.0:*                   LISTEN      0          167407     11264/java
tcp        0      0 0.0.0.0:50030               0.0.0.0:*                   LISTEN      0          166747     11147/java
tcp        0      0 0.0.0.0:50070               0.0.0.0:*                   LISTEN      0          165669     10866/java
tcp        0      0 0.0.0.0:50010               0.0.0.0:*                   LISTEN      0          166951     10971/java
tcp        0      0 0.0.0.0:50075               0.0.0.0:*                   LISTEN      0          166955     10971/java
tcp        0      0 127.0.0.1:55839             0.0.0.0:*                   LISTEN      0          166816     11264/java
tcp        0      0 0.0.0.0:50020               0.0.0.0:*                   LISTEN      0          165843     10971/java
tcp        0      0 127.0.0.1:54310             0.0.0.0:*                   LISTEN      0          165535     10866/java
tcp        0      0 127.0.0.1:54311             0.0.0.0:*                   LISTEN      0          166733     11147/java

e) Now you can check the web interfaces for the JobTracker & NameNode, as configured in mapred-site.xml & hdfs-site.xml in the conf directory. They are at
JobTracker – http://localhost:50030/

NameNode – http://localhost:50070/

f) Copy files from your local directory to HDFS
$/usr/local/hadoop/bin/hadoop dfs -copyFromLocal /home/hduser/input /user/hduser/input
Ensure that the files have been copied by listing the contents of HDFS

g) Check that files have been copied
$/usr/local/hadoop/bin/hadoop dfs -ls /user/hduser/input
Found 9 items
-rw-r--r--   1 root supergroup       7457 2012-06-10 10:31 /user/hduser/input/capacity-scheduler.xml
-rw-r--r--   1 root supergroup       2447 2012-06-10 10:31 /user/hduser/input/core-site.xml
-rw-r--r--   1 root supergroup       2300 2012-06-10 10:31 /user/hduser/input/core-site_old.xml
-rw-r--r--   1 root supergroup       5044 2012-06-10 10:31 /user/hduser/input/hadoop-policy.xml
-rw-r--r--   1 root supergroup       7595 2012-06-10 10:31 /user/hduser/input/hdfs-site.xml

h) Now execute the grep functionality
[root@localhost hduser]# /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/hadoop-examples-1.0.4.jar grep /user/hduser/input /user/hduser/output 'dfs[a-z.]+'
12/06/10 10:34:22 INFO util.NativeCodeLoader: Loaded the native-hadoop library
….
12/06/10 10:34:23 INFO mapred.JobClient: Running job: job_201206101010_0003
12/06/10 10:34:24 INFO mapred.JobClient:  map 0% reduce 0%
12/06/10 10:34:48 INFO mapred.JobClient:  map 11% reduce 0%


12/06/10 10:35:21 INFO mapred.JobClient:  map 88% reduce 22%
12/06/10 10:35:24 INFO mapred.JobClient:  map 100% reduce 22%
12/06/10 10:35:27 INFO mapred.JobClient:  map 100% reduce 29%
12/06/10 10:35:36 INFO mapred.JobClient:  map 100% reduce 100%
12/06/10 10:35:42 INFO mapred.JobClient: Job complete: job_201206101010_0003
….
12/06/10 10:36:16 INFO mapred.JobClient:     Reduce input groups=3
12/06/10 10:36:16 INFO mapred.JobClient:     Combine output records=0
12/06/10 10:36:16 INFO mapred.JobClient:     Physical memory (bytes) snapshot=180502528
12/06/10 10:36:16 INFO mapred.JobClient:     Reduce output records=36
12/06/10 10:36:16 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=695119872
12/06/10 10:36:16 INFO mapred.JobClient:     Map output records=36
i) Check the result
[root@localhost hduser]# /usr/local/hadoop/bin/hadoop dfs -cat /user/hduser/output/*
6          dfs.data.dir
2          dfs.
2          dfs.block.access.token.enable
2          dfs.cluster.administrators
2          dfs.datanode.address
2          dfs.datanode.data.dir.perm
2          dfs.datanode.http.address
2          dfs.datanode.kerberos.principal
2          dfs.datanode.keytab.file
2          dfs.exclude
…..

j) Finally stop all the Hadoop processes
$/usr/local/hadoop/bin/stop-all.sh
Have fun with Hadoop…


Sneak preview of Windows 8 with VMWare Workstation 8.0.3

Here’s a sneak preview of the Windows 8 evaluation version using VMWare’s Workstation 8.0.3. For those who read my earlier post “Experiences with VMWare Workstation 8.0.3: The good, bad and the ugly”, the Windows 8 VM experience must definitely rate as good. The setup and installation of Windows 8 in Workstation was a breeze. There was just one hiccup, which is mentioned below.

The initial experience with Windows 8 is truly breathtaking. The metro-style screen with its mosaic of tiles looks really great. Besides, Microsoft with Windows 8 is definitely taking the right path with a tile for the App Store and the SkyDrive. More on that later…

To get started download Windows 8 Release preview ISO image from http://windows.microsoft.com/en-US/windows-8/iso. Make a note of the Product key in the page.

Start your VMWare Workstation and choose “Create a new VM”. Browse to the directory which has the ISO image and start the VM. Use the product key that you made a note of on the download page. While the installation starts you are bound to run into the error “Windows cannot read the product key from the unattend answer file”. To fix this issue power off the Windows 8 VM. Now select the “Settings” of the VM and remove the floppy drive from the settings. Now power on your VM. This time things should go smoothly and your installation process should begin.

Soon you should see Windows 8 installation screen

Choose the custom option as shown below

The installation should start and you should see

Follow the prompts and pretty soon you should see a really appealing Windows 8 metro style screen. The screen has a really cool tiled look. In fact with this look icons seem almost passe.

A quick look at this screen and you will see that Microsoft has now included the Store (App Store) and the SkyDrive. I am certain both of these will be put to great use in the future. Games and apps will be downloaded from the App Store. Play around the desktop.

Windows 8 is supposed to be based on touch where the user touches the screen to select an application. To navigate between applications or to get back to the metro-style screen move the mouse to the lower left corner of the screen and you should see a small metro-style screen. The top left corner has your current running applications.

I wanted to check out the Skydrive. So I created 2 text files in my Documents folder and selected Skydrive.

You can right click the files and select them. Go to the bottom right corner and right click. You should see the task bar pop up. Click add and you will get a screen as shown below

Uploading files and folders to the cloud is bound to be commonly used in the not too distant future. The SkyDrive right on your desktop will be a godsend for users who want to keep a backup copy on the cloud.

The App Store is another alluring addition.

If the boot  and load times of applications are fast in Windows 8 then Windows 8 looks to be a clear winner.

In fact with the stylish tiled look, touch interface, app store and Skydrive Windows 8 may actually give iPad a run for its money given the fact that Windows 8 provides actual computing capability in addition to consuming content.


Taking baby steps in Lisp

Lisp can be both fascinating and frustrating. Fascinating, because you can write compact code to solve really complex problems. Frustrating, because you can easily get lost in its maze of parentheses. I, for one, have been truly smitten by Lisp. My initial encounter with Lisp did not yield much success as I tried to come to terms with its strange syntax. The books I read on the Lisp language typically dive straight into its exotic features, like writing Lisp code to solve the Towers of Hanoi or the Eight Queens problem. They talk about functions returning functions, back quotes and macros that can make your head spin.

This approach made it extremely difficult for me to digest the language. So I decided to view Lisp through the eyes of any other regular programming language like C, C++, Java, Perl, Python or Ruby. I was keen on being able to do regular things with Lisp before trying out its unique features. So I decided to investigate Lisp from this viewpoint and learn how to make Lisp do mundane things like assignments, conditionals, loops, arrays, input and output etc.

This post is centered on that.

Assignment statement

The most fundamental requirement for any language is to perform an assignment. For e.g. these are assignment statements in Lisp and their equivalents in C

$ (setf x 5)                                                         -> $ x = 5
$ (setf x (+ (* y 2) (* z 8)))                               -> $ x = 2*y + 8*z

Conditional statement

There are a couple of forms of conditional statement in Lisp. The most basic is the ‘if’ statement, which is a special form. It gives you if-then-else, but not an if-then-else-if chain

if (condition) statement else-statement

In Lisp this is written as
$ (setf x 5)
$ (if (= x 5)
(setf x (+ x 5))
(setf x (- x 6)))
10

In C this equivalent to
$ x = 5
$ if (x == 5)
x = x + 5;
else
x = x -6;

However Lisp allows an if / else-if / else chain through the use of the COND statement

So we could write

$ (setf x 10)
$ (cond ((< x 5) (setf x (+ x 8)) (setf y (* 2 y)))
((= x 10) (setf x (* x 2)))
(t (setf x 8)))
20

The above statement in C would be
$ x = 10
$ y = 10
$ if (x < 5)
{
x = x + 8;
y = 2 * y;
}
else if (x == 10)
{
x = x * 2;
}
else
x = 8;

Loops
Lisp has many forms of loops: dotimes, dolist, do, loop for etc. I found the following the most intuitive way to get started
$ (setf x 5)
$ (let ((i 0))
(loop
(setf y (* x i))
(when (> i 10) (return))
(print i) (prin1 y)
(incf i)))

In C this could be written as
$ x = 5
for (i = 0; i <= 10; i++)
{
y = x * i;
printf("%d %d\n", i, y);
}

Another easy looping construct in Lisp is
(loop for x from 2 to 10 by 3
do (print x))
In C this would be
for (x = 2; x <= 10; x = x + 3)
printf("%d\n", x);

Arrays
To create an array of 10 elements with initial value of 20
(setf numarray (make-array 10 :initial-element 20))
#(20 20 20 20 20 20 20 20 20 20)
To read an array element it is
$ (aref numarray 3)                    ->  numarray[3]
For e.g.
(setf x (* 2 (aref numarray 4)))     ->  x = numarray[4] * 2

Functions
(defun square (x)
(* x x))
This is the same as

int square (int x)
{
return (x * x)
}

While in C you would invoke the function as
y = square (8)

In Lisp you would write as
(setf y (square 8))

Note: In Lisp a function is invoked as (function arg1 arg2 … argn) instead of function(arg1, arg2, …, argn) as in C

Structures
a) Create a global variable *db*
(defvar *db* nil)

b) Make functions to create and add an employee, and to dump the DB (these are simple versions of add-emp & dump-db)
$ (defun make-emp (name age title)
(list :name name :age age :title title))
$ (defun add-emp (emp) (push emp *db*))
$ (defun dump-db () (dolist (emp *db*) (print emp)))
$ (add-emp (make-emp "ganesh" 49 "manager"))
$ (add-emp (make-emp "manish" 50 "gm"))
$ (add-emp (make-emp "ram" 46 "vp"))
$ (dump-db)

For a more complete and excellent treatment of managing a simple DB look at Practical Common Lisp by Peter Seibel
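The same plist-style records map naturally onto dicts; here is a hypothetical sketch of the mini-DB in Python (make_emp/add_emp are my own names, not from any library):

```python
# A toy in-memory employee "DB": a list of dicts, mirroring the Lisp plists.
db = []

def make_emp(name, age, title):
    return {"name": name, "age": age, "title": title}

def add_emp(emp):
    db.append(emp)

add_emp(make_emp("ganesh", 49, "manager"))
add_emp(make_emp("manish", 50, "gm"))
add_emp(make_emp("ram", 46, "vp"))

for emp in db:  # dump the DB
    print(emp)
```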

Reading and writing to standard output
To write to standard output you can use
(print "This is a test") or
(print '(This is a test))
To read from standard input use
(let ((temp 0))
(print '(Enter temp))
(setf temp (read))
(print (append '(the temp is) (list temp))))

Reading and writing to a file
The typical way to do this is to use

a) Read
(with-open-file (stream "C:\\acl82express\\lisp\\count.cl")
(do ((line (read-line stream nil)
(read-line stream nil)))
((null line))
(print line)))

b) Write
(with-open-file (stream "C:\\acl82express\\lisp\\test.txt"
:direction :output
:if-exists :supersede)
(write-line "test" stream)
nil)

I found the following construct a lot easier
(let ((in (open "C:\\acl82express\\lisp\\count.cl" :if-does-not-exist nil)))
(when in
(loop for line = (read-line in nil)
while line do (format t "~a~%" line))
(close in)))
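For comparison, the same write-then-read cycle in Python (a sketch; the file name is made up):

```python
import os
import tempfile

# Write a few lines to a temp file, then read them back line by line.
path = os.path.join(tempfile.gettempdir(), "count_demo.txt")

with open(path, "w") as out:
    out.write("line one\n")
    out.write("line two\n")

with open(path) as infile:
    lines = [line.rstrip("\n") for line in infile]

print(lines)  # ['line one', 'line two']
os.remove(path)
```

The with-open-file / with open parallel is close: both guarantee the stream is closed when the block exits.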

With the above you can get started on Lisp. However with just the above constructs the code one writes will be very “non-Lispy”. Anyway this is definitely a start.


Experiences with VMWare Workstation 8.0.3 – The good, bad and the ugly

VMs (virtual machines) are the fundamental unit of the cloud, so I was interested in getting my hands around virtualization and virtual machines. Fortunately VMWare’s Workstation provides the opportunity: VMWare gives the user a 30-day evaluation license for Workstation 8.0.3. So I downloaded VMWare’s Workstation 8.0.3 to my desktop running Windows XP. If you read my earlier post “Installing and configuring Fedora 16 with Windows XP using a bootable USB”, I had a dual-boot desktop running either XP or Fedora 16.

Installing and getting Workstation 8.0.3 started was a breeze. I then created a VM using the Fedora 16 ISO file which I had downloaded for creating the dual-boot Fedora & XP setup. The Workstation created a VM for me in a couple of minutes. As before, the VM running Fedora 16 showed “LiveCD” in its top right corner; in other words it was running the OS off a virtual CD. Anyway it was great and seemed really easy to get started.

The Ugly: I wanted to do more things with the VM. I read up the documentation on “Using the Workstation” etc. I wanted to install the VMware Tools, clone a VM, save a snapshot and so on. When I tried to install VMware Tools I got a message saying the CD-ROM was in use. I checked the settings and found the CD-ROM was being used to boot Fedora; I actually needed to “install to disk”. As in my previous post, I decided I needed to create free space. Unfortunately I got ahead of myself, I think. While the Workstation was running I tried to access Windows Disk Management. This took a long time and I also got a message saying that the device was busy. As an afterthought this appears perfectly reasonable, as the Workstation must have allocated space for the VM on the disk and must have held the disk.

My disk had a primary partition C: drive with Win XP, a free logical drive D:, and a partition holding my Fedora 16. I foolishly deleted drive D:. This is where all hell broke loose. The deletion took a long time, and when I opened Disk Management again I found that the values it was showing were out of whack: the C: drive showed 1820 GB when it should have been 70 GB, the D: drive 2087 GB, and other equally vague figures. I realized that I had messed up my disk.

 

Here a thought struck me. Maybe if I restart the system the OS will work things out. But alas when I rebooted I got this

error: No such partition

grub rescue>

I knew I had really messed up. I could not boot my system. As I had mentioned before I could not boot Windows from my CD drive as it did not work. After trying a couple of different things I tried to boot with my USB drive.

Thank God I was able to boot Fedora. I then used fdisk to see my partitions and realized I had clobbered my 2nd partition, which was showing an incorrect size. I used fdisk to set the size right and then installed Fedora 16 on my PC by writing to disk. Unfortunately I lost my XP drive and was left with a Linux-only PC. Fortunately there was no data lost; this was a new PC which I had recently got.

The Bad: Now with Fedora 16 up and running I decided to try to install Workstation 8.0.3 on Fedora 16. I downloaded the Workstation 8.0.3 bundle and extracted it, but when I tried to run it I ran into my 1st problem.

Cannot load modules pk-gtk-module & canberra-gtk-module, and I also got the message “Failed”

So with some googling I found that I needed to do

yum install PackageKit-gtk-module &

yum install libcanberra-gtk2 libcanberra-gtk3 libcanberra-gtk2.i686 libcanberra-gtk3.i686

I also got the message that some kernel modules needed to be compiled. When I went and checked

/lib/modules/3.1.0-7.fc16.i686 I found that the “build” link was broken. So I set off on another Google search on how to fix the broken Fedora build link. Finally I found the answer here

http://www.linuxquestions.org/questions/linux-newbie-8/broken-link-fedora-build-rpmtree-does-not-exist-520474/

I did a

yum install kernel-devel

As mentioned in the link above I rebooted the system. This seemed to create

/lib/modules/3.3.7-1.fc16.i686.

The build in this directory was fine. Also giving uname -a showed that the kernel was updated to the new version.

I tried to start the Workstation 8.0.3 again. Now the number of complaints was less. I got a message saying the files needed to be compiled. When I clicked ok it went through the compilation process. I knew I was making progress. But anyway it once again bailed out with

Gtk-Message: Failed to load module “pk-gtk-module”: libpk-gtk-module.so: cannot open shared object file: No such file or directory

Gtk-Message: Failed to load module “canberra-gtk-module”: libcanberra-gtk-module.so: cannot open shared object file: No such file or directory

It appears that there is a patch which needs to be applied to fix the kernel. I used the following from this post

http://communities.vmware.com/thread/343441/

I downloaded workstation-8.0.2-linux3.2patch (15K) and ran the commands

cd /usr/lib/vmware/modules/source

tar xfv vmnet.tar

patch -p0 < ~/workstation-8.0.2_linux-3.2.patch

tar cfv vmnet.tar vmnet-only/

vmware-modconfig --console --install-all

Though mine was Workstation 8.0.3, this went through fine.

I also did

LD_LIBRARY_PATH=/usr/lib/gtk-2.0/modules

export LD_LIBRARY_PATH

I started the workstation 8.0.3 and lo and behold it finally came up.

I then downloaded the Fedora 17 ISO file (http://fedoraproject.org/en/get-fedora-options) and created a VM with that. My PC with about ~1 GB RAM groaned; it took nearly 20 mins for the VM to be up and running.

The searching and fixing took me nearly 7–8 hours, but I was finally able to get Workstation 8.0.3 up and running with a Fedora 17 VM.

Also do take a look at the good part of VMWare Workstation 8.0.3: Sneak preview of Windows 8 with VMWare Workstation 8.0.3
