The story of virtualization

The journey from the early days of batch processing to today's virtualized computing has been an exciting march of progress. The innovations and ideas along the way have transformed the computing landscape as we know it, and they promise still more breathtaking changes to come.

Batch processing: Programs on the computers of those days were written on punch cards, also known as Hollerith cards. A separate terminal would be used to create and edit a program, resulting in a stack of punched cards. The stacks of different users' programs would be loaded into a card reader, which queued them for processing by the computer. Each program was then executed in sequential order.

Imagine if our days were structured sequentially, where we would need a particular task to complete fully before starting another. That would be a true waste of time; while one task progresses, we could be focusing on other tasks.

The inefficiencies of batch processing soon became obvious and led to the development of multitasking systems, in which each user's application is granted a slice of CPU cycles. The operating system (OS) cycles through the list of processes, granting each a specific number of cycles to compute each time around. This development soon led to the different operating systems we know, including Windows, Unix, Linux and so on.

Multitasking: Multitasking evolved because designers realized that Central Processing Unit (CPU) cycles were wasted while a program waited for input/output to arrive or complete. Hence the computer's operating system (OS), its central nervous system, would swap the waiting user's program out of the CPU and grant the CPU to other user applications. This way the CPU is utilized efficiently.

The pen analogy: For this analogy, let us consider a fountain pen to be the CPU. While Joe is writing a document, he uses the fountain pen. Now let's assume that Joe needs to print a document. While Joe saunters off to pick up his printout, the fountain pen is given to Kartik, who needs to work on his tax report. Kartik soon gets tired and takes a coffee break. Now the pen is given to Jane, who needs to fill out a form. When Jane completes her form, the pen is handed back to Joe, who has just returned with his printout. The pen (the CPU) is thus used efficiently among the many users.
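
To make the idea concrete, here is a minimal sketch of round-robin time-slicing in Python. The task names and work units are hypothetical, and a real OS scheduler also juggles priorities, I/O waits and much more:

from collections import deque

QUANTUM = 2  # CPU slice granted to each task per turn

def round_robin(tasks):
    queue = deque(tasks)  # (name, remaining work units)
    while queue:
        name, remaining = queue.popleft()
        used = min(QUANTUM, remaining)
        print(f"{name} runs for {used} unit(s)")
        remaining -= used
        if remaining > 0:
            # Not finished: swap the task out to the back of the queue
            queue.append((name, remaining))
        else:
            print(f"{name} completes")

# Hypothetical workloads for the users in the pen analogy
round_robin([("Joe", 5), ("Kartik", 3), ("Jane", 2)])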

While multitasking was a major breakthrough, it did lead to an organization's applications being developed on different OS flavors. Hence a large organization would be left with software silos, each with its own unique OS. This was a problem when the organization wanted to consolidate all its relevant software under a common umbrella. For example, a telecom operator may have payroll applications that run on Windows, accounting on Linux and human resources on Unix. It thus became difficult for the organization to get a holistic view of what happened in the finance department as a whole. Enter ‘virtualization’. Virtualization enables applications created for different OSes to run over a layer known as the “hypervisor” that abstracts the raw hardware.

Virtualization: Virtualization, in essence, abstracts the raw hardware through a software layer called the hypervisor. The hypervisor runs on the bare metal of the machine. Applications that run over the hypervisor can each use the operating system of their choice, namely Windows, Linux, Unix and so on. The hypervisor effectively translates the instructions of these different OSes into the machine instructions of the underlying processor.

The car analogy: Imagine that you get into a car. Once inside, you find a button which, when pressed, converts the car into a roaring Ferrari or Lamborghini, or a smooth Mercedes or BMW. The dashboard, the seats and the engine all magically transform into the car of your dreams. This is exactly what virtualization tries to achieve.

Server Pooling: However, virtualization went further than just enabling applications created for different OSes to run on a single server loaded with the hypervisor. Virtualization also enabled the consolidation of server farms. It brings together the different elements of an enterprise, namely the servers, each with its memory and processors, the different storage options (direct attached storage (DAS), Fibre Channel storage area networks (FC SAN), network attached storage (NAS)) and the networking elements. Virtualization consolidates these compute, storage and networking elements and provides the illusion that the appropriate compute, storage and network resources are available to applications on demand. The applications are provided with virtual machines carrying the computing, storage and network units they require. Virtualization also takes care of providing high availability (HA), mobility and security to the applications, besides maintaining this illusion of shared resources. Moreover, if any of the servers on which an application is executing goes down for any reason, the application is migrated seamlessly to another server.

The train analogy: Assume that there is a train with ‘n’ wagons. Commuters can get on and off at any station. When they board, they are automatically allocated a seat, a berth and so on. The train keeps track of how occupied it is and provides the appropriate seating dynamically. If the wheels of any wagon get stuck, the passengers are lifted and shifted, seamlessly, to another wagon while the stuck wagon is safely de-linked from the train.
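
As an illustration, here is a toy sketch in Python of this pooling-and-migration idea. The server names, core counts and VM sizes are all hypothetical; real virtualization platforms do this with far more sophistication:

# Toy resource pool: place VMs on servers with free capacity,
# and evacuate VMs when a server fails.
servers = {"server-1": 16, "server-2": 16, "server-3": 8}  # free CPU cores
placements = {}  # VM name -> (server name, cores used)

def place_vm(vm, cores):
    for name, free in servers.items():
        if free >= cores:
            servers[name] = free - cores
            placements[vm] = (name, cores)
            print(f"{vm} ({cores} cores) placed on {name}")
            return
    raise RuntimeError("pool exhausted")

def fail_server(failed):
    # Like shifting passengers out of a stuck wagon: re-place every
    # VM that was running on the failed server.
    victims = [(vm, c) for vm, (srv, c) in placements.items() if srv == failed]
    del servers[failed]
    for vm, cores in victims:
        del placements[vm]
        place_vm(vm, cores)

place_vm("payroll", 4)
place_vm("accounting", 8)
fail_server("server-1")  # both VMs migrate to the remaining servers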

Virtualization has many applications. It is the dominant technology used in the creation of public, private and hybrid clouds, providing an on-demand, scalable computing environment. Virtualization is also used in the consolidation of server farms, enabling optimum usage of the servers.


Profiting from a cloud deployment

Cloud computing does offer enterprises and organizations a mixed bag of goodies. For one, it provides utility-style computing, the ability to grow and shrink with changing loads, zero upfront costs and so on. The benefits of cloud computing are many, but does it all add up to profit for an enterprise? That is the critical question that needs to be answered.

This post takes a look at what it takes for a cloud deployment to be profitable for an organization.

The critical parameters for any web application are latency and throughput. A well-designed web application, whether it is an e-retail site or an ad-serving application, will try to minimize the latency, or response time, while at the same time maximizing the throughput. While the latency of an application can be kept within specified limits, the throughput will tend to plateau at a certain level and will not increase with increasing traffic. Utilizing a larger instance can raise the throughput plateau slightly, but the reality is that throughput flattens as the traffic increases.
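
A back-of-the-envelope way to see this plateau: if an instance can sustain at most some fixed number of requests per second, the achieved throughput is roughly the minimum of the offered traffic and that capacity. The numbers in this Python sketch are purely illustrative:

CAPACITY = 500  # max requests/sec a (hypothetical) instance can sustain

for offered in [100, 300, 500, 800, 1200]:
    throughput = min(offered, CAPACITY)
    print(f"offered={offered:5d} req/s -> throughput={throughput} req/s")

# Beyond 500 req/s the throughput flattens; a larger instance only
# raises the plateau (CAPACITY), it does not remove it.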

A typical cloud application will be made up of several compute instances, database instances, DNS services and so on. Cloud usage is billed by the hour. Hence we can represent the cost of a cloud deployment as follows:

Cost (cloud deployment) = m * compute instance + n * database instance + o * network bytes + P

where P = the cost of DNS + Elastic IPs + other costs, and m, n and o are the quantities consumed of each resource.

This can be represented by the formula

C = a * D * t

where C = the cost of the cloud deployment,

D = the cost per hour of the deployment,

and ‘a’ is some constant of proportionality and ‘t’ is the time.
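
As a sketch, this cost model can be coded directly. All of the rates below are made-up placeholders, not actual cloud prices:

COMPUTE_RATE = 0.10  # $ per compute instance-hour (placeholder)
DB_RATE = 0.25       # $ per database instance-hour (placeholder)
NETWORK_RATE = 0.05  # $ per GB transferred (placeholder)
FIXED = 2.00         # P: DNS, Elastic IPs and other costs per hour (placeholder)

def hourly_cost(m, n, gb_per_hour):
    """D: the cost per hour of the deployment."""
    return m * COMPUTE_RATE + n * DB_RATE + gb_per_hour * NETWORK_RATE + FIXED

def cumulative_cost(m, n, gb_per_hour, hours, a=1.0):
    """C = a * D * t"""
    return a * hourly_cost(m, n, gb_per_hour) * hours

# e.g. 4 compute instances, 2 databases, 10 GB/hour, for a 30-day month
print(f"D = ${hourly_cost(4, 2, 10):.2f}/hour")
print(f"C = ${cumulative_cost(4, 2, 10, 24 * 30):.2f} for the month")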

Let us assume that for the cloud deployment we get a throughput of T.

The revenue for a web application, whether it is an e-commerce site, an e-ticketing site or an ad-serving engine, will depend on the throughput: the larger the throughput, the larger the revenue and hence the profit. We can then say that the revenue ‘R’ is

R (revenue) = k * T * t

In other words, the revenue is proportional to the throughput.

Hence, to determine the profitability of a particular cloud deployment, we need to compare the cost of the deployment against the revenue arising from its throughput. As long as the cost of the deployment is less than the revenue arising from the throughput, the deployment will be profitable. This can be represented pictorially as below.

The graph clearly shows that for a profitable deployment

d/dt (k * T * t) > d/dt (a * D * t), or

k * T > a * D

Hence, as can be seen from the picture, as long as the slope of the cumulative deployment cost is less than the slope of the revenue, the deployment will be profitable.
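
The profitability check itself is then a one-line comparison of the two slopes. The figures below are made up, reusing the hourly cost from the earlier sketch:

k = 0.001         # hypothetical revenue per request ($)
T = 400 * 3600    # throughput of 400 req/s, expressed per hour
a, D = 1.0, 3.40  # cost slope from the earlier sketch ($/hour)

revenue_slope = k * T  # $/hour earned
cost_slope = a * D     # $/hour spent
print(f"revenue slope = ${revenue_slope:.2f}/hr, cost slope = ${cost_slope:.2f}/hr")
print("profitable" if revenue_slope > cost_slope else "not profitable")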


Cloud Computing – Show me the money!

Published in Telecom Lead – Cloud Computing – Show me the money!

A lot has been said about the merits of cloud computing and how it is going to be the technological choice of most enterprises in the not-so-distant future. But the key question that is bound to keep cropping up in the higher echelons of the enterprise is whether the cloud makes good business sense. While most know that cloud computing adopts a pay-per-use model, similar to regular utilities like electricity and water, and does away with upfront infrastructure costs, the nagging question for senior management is whether cloud computing is a prudent choice in the long term.

This is not an easy question to answer, and it depends on a multitude of factors. The alternative to cloud computing is an in-house infrastructure of servers, hardware and software, software licenses, broadband links, firewalls and so on. All of these form the Capital Expenditure (CAPEX) of the organization. In addition there is the Operational Expenditure (OPEX): real estate to house the equipment, power supply systems, cooling systems, maintenance personnel, annual maintenance contracts (AMCs) and the like, which are recurring expenses for the organization.

Cloud computing does away completely with the procurement of hardware, software, databases, licenses and so on, and an enterprise should be able to host its application within a couple of hours, provided it knows ahead of time the resources the application will need.

Hence, while the upfront and running costs of maintaining a data center are high in comparison with the zero upfront cost of deploying on the cloud, the steeper operational costs of the cloud will eventually catch up with those of the in-house infrastructure.

Depending on how well the application is designed, the point at which the cumulative running costs of the cloud break even with those of an in-house data center can be made to occur a couple of years after the application is deployed. Assuming that the break-even happens in three years, the advantage of a cloud deployment is that the enterprise does not have to worry about equipment obsolescence, software upgrades and so on, not to mention the depreciation of the equipment costs.
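
A rough way to locate that break-even point, with all figures hypothetical:

# In-house: a large upfront CAPEX plus a modest recurring OPEX.
# Cloud: zero upfront cost but a steeper recurring bill.
CAPEX = 120_000          # upfront in-house cost ($), hypothetical
OPEX_PER_MONTH = 2_000   # in-house running cost ($/month), hypothetical
CLOUD_PER_MONTH = 5_500  # cloud running cost ($/month), hypothetical

month = 0
while CLOUD_PER_MONTH * month < CAPEX + OPEX_PER_MONTH * month:
    month += 1
print(f"cloud cumulative cost overtakes in-house after ~{month} months "
      f"(~{month / 12:.1f} years)")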

Moreover, cloud technology is extremely useful to enterprises planning to deploy applications for which it is difficult to forecast the traffic that will hit them. Where the traffic may be intermittent, bursty or seasonal, the cloud makes perfect business sense, since it can scale up or scale down depending on the traffic.

Some typical applications which are prime candidates for the cloud are CRM software, office tools, testing tools, online retail stores, webmail etc.

One possible worry for the enterprise will be security concerns when deploying to the public cloud. In such situations the organization can take a hybrid strategy, where its sensitive data is hosted in in-house data centers while its main application is hosted on a public cloud.

Hence, in most situations, cloud deployments do have a definite edge for certain key applications of the enterprise.
