Dissecting the Cloud – Part 1

“The Cloud brings with it the promise of utility-style computing and the ability to pay according to usage.

Cloud Computing provides elasticity, or the ability to grow and shrink based on traffic patterns.

Cloud Computing does away with CAPEX and the need to buy infrastructure upfront, replacing it with an OPEX model, and so on.”

All this is old news and has been repeated many times. But what exactly constitutes cloud computing? What brings about the above features? What are the building blocks of the cloud that enable one to realize them?

This post tries to look deeper into the innards of the Cloud to determine what the cloud really is.

Before we get to this I would like to dwell on an analogy to understand the Cloud better.

Let us assume Mr. A owns a large building of about 15,000 sq ft that is about 100 feet tall, and that he wants to rent this building out.

Now, assume that the door of this building opens to a single, large room on the inside!

Mr. X comes to rent this building. If that were the case, poor Mr. X would have to pay through his nose, presumably for the entire building, even though his requirement was for a small room of about 600 sq ft. Imagine the waste of space. It would also result in an enormous waste of electricity; imagine the lighting needed. An inordinate amount of water would also be needed if this single, large room had to be cleaned. The cost of all this would have to be borne by Mr. X.

This is clearly not a pleasant state of affairs, either for Mr. X or for Mr. A, the owner of the building.

The solution to this is easy. What Mr. A needs to do is partition the building into self-contained rooms of about 600 sq ft each, with all the amenities. Each self-contained unit would need to have its own electricity and water meter.

Now Mr. A can rent rooms to different tenants on a need basis. This is a win-win situation for both Mr. A and Mr. X. The tenants only need to pay for the rooms they occupy and the electricity and water they consume.

This is exactly the principle behind cloud computing, and it is known as ‘virtualization’.

There are three computing components one must consider: CPU, network and storage. The picture below shows the virtualization of CPU, RAM, NIC (network card) and disk (storage).

[Figure: Server virtualization – logical view]

The Cloud is essentially made up of anywhere between 100 and 100,000 servers. The servers are akin to the large building: running a single OS and application(s) on an entire server is a waste of computing, storage and network resources.

Virtualization abstracts the hardware, storage and network through the use of software known as the ‘hypervisor’. On top of the hypervisor several ‘guest OSes’ can run, and applications can then run on these guest OSes.

Hence, over the CPU (single, dual or multi-core) of the server, multiple guest OSes can run, each with its own set of applications.

This is similar to partitioning the large CPU resource of the server into smaller units.
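To make the partitioning idea concrete, here is a small Python sketch of a server being carved up into guest VMs, each with its own slice of CPU and RAM. The class and the numbers are purely illustrative assumptions and are not the API of any real hypervisor such as ESXi, XenServer or Hyper-V, which do this in software beneath the guest OS.

# Toy model of carving a physical server into guest VMs.
# Names and figures are illustrative only.

class Host:
    def __init__(self, cores, ram_gb):
        self.cores, self.ram_gb = cores, ram_gb
        self.guests = []

    def free_cores(self):
        return self.cores - sum(g["vcpus"] for g in self.guests)

    def free_ram(self):
        return self.ram_gb - sum(g["ram_gb"] for g in self.guests)

    def add_guest(self, name, vcpus, ram_gb):
        # Admit a guest VM only if the host still has spare CPU and RAM
        if vcpus <= self.free_cores() and ram_gb <= self.free_ram():
            self.guests.append({"name": name, "vcpus": vcpus, "ram_gb": ram_gb})
            return True
        return False

host = Host(cores=16, ram_gb=64)               # the 'large building'
host.add_guest("web-vm", vcpus=4, ram_gb=8)    # self-contained 'rooms'
host.add_guest("db-vm", vcpus=8, ram_gb=32)
print(host.free_cores(), host.free_ram())      # capacity left to rent out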

There are three main virtualization technologies, namely VMware, Citrix XenServer and Microsoft Hyper-V.

Here is a diagram showing the three main virtualization technologies.

[Figure: Server virtualization technologies]

To be continued …



The emergence of Social Software as a Service (SSaaS)

Published in Telecom Asia, 17 Feb 2012, as “The dawn of Social Software as a Service”

We are in the midst of a Social Networking revolution as we progress into the next decade. As technology becomes more complex in a flatter world, cooperating and collaborating will not only be necessary but imperative. McKinsey, in its recent report “Wiring the Open Source Enterprise”, talks of the future of a “networked enterprise”, which will require the enterprise to integrate Web 2.0 technologies into its enterprise computing fabric.

Another McKinsey report, “The rise of the networked enterprise: Web 2.0 finds its payday”, states that the “Web 2.0 payday could be arriving faster than expected”. It goes on to add that “a new class of company is emerging—one that uses collaborative Web 2.0 technologies intensively to connect the internal efforts of employees and to extend the organization’s reach to customers, partners, and suppliers”.

Social Software utilizing Web 2.0 technologies will soon become the new reality if organizations want to stay agile. Social Software includes those technologies that enable the enterprise to collaborate through blogs, wikis, podcasts, and communities. A collaborative environment will unleash greater fusion of ideas and can trigger enormous creative processes in the organization.

According to Prof. Clay Shirky of New York University, the underused human potential at companies represents an immense “cognitive surplus” which can be tapped by participatory tools such as Social Software.

A fully operational social network in the organization will enable quicker decision making, trigger creative collaboration and deliver a faster ROI for the enterprise. A shared knowledge pool enables easier access to key information from across the enterprise and facilitates faster decision making.

Enterprise Social Software enables access to a shared knowledge pool across the organization. Employees can share ideas, seek out expert opinion and arrive at solutions much faster. Social collaboration tools can truly unleash a profusion of creative ideas and thought across the organization and enable better problem-solving.

Clearly the social network paradigm is a new concept which needs to be adopted by any organization that wants a greater market share and a faster time to market. In today’s knowledge-intensive world the need for an enterprise strategy focused on enabling collaboration through the use of Web 2.0 becomes obvious.

However, enterprises that would like to embrace Social Technologies face the twin challenges of i) developing the application and ii) deploying it in their own data center.

Enterprises would be faced with the typical “build-vs.-buy” quandary. Organizations that want to benefit quickly from Web 2.0 technologies would prefer a buy rather than a build option.

Besides, the deployment of a Social Computing platform would require the commissioning of large data centers to allow for simultaneous access by the platform’s users. But the attendant problems of maintaining a large data center can be very intimidating. The top three challenges of large data centers typically center around:

a)      The problems of data growth
b)      The challenges of performance and scalability
c)      The sticky issue of network congestion and connectivity

It is against this backdrop of the relevance of Social Software vis-à-vis the enterprise’s need for collaboration tools that Social Software as a Service (SSaaS) makes eminent sense.

If SSaaS could be provided as a service to enterprises, with the option of deploying it on either a public or a private cloud, it would make for a very attractive service.

Enterprises would not have to go through the software development lifecycle of building the social collaboration tools, and they would also save the upfront capital expenditure of creating the associated data centers. In addition, the enterprise would not have to face the technical challenges of maintaining those data centers.

Enterprises could either license the SSaaS tools for the organization’s internal use among its employees, or open them up to employees, suppliers and partners, enabling a greater collaboration of ideas and thoughts.

The SSaaS and cloud service provider would charge the enterprise on a pay-per-use basis, based on the number of users and on the compute, storage and network consumed.
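As a rough illustration of such a pay-per-use policy, the short Python sketch below computes a hypothetical monthly SSaaS bill from user count, compute hours, storage and network usage. The rates and meter names are assumptions made up for this example and do not correspond to any actual provider’s pricing.

# Hypothetical pay-per-use bill for an SSaaS tenant.
# Rates and usage figures are made up purely for illustration.

RATES = {
    "user_month": 2.00,        # $ per active user per month
    "compute_hour": 0.10,      # $ per instance-hour
    "storage_gb_month": 0.05,  # $ per GB-month stored
    "network_gb": 0.08,        # $ per GB transferred
}

def monthly_bill(users, instance_hours, storage_gb, network_gb):
    return (users * RATES["user_month"]
            + instance_hours * RATES["compute_hour"]
            + storage_gb * RATES["storage_gb_month"]
            + network_gb * RATES["network_gb"])

# e.g. 500 users, 2 instances running all month, 200 GB stored, 150 GB transferred
print(round(monthly_bill(500, 2 * 24 * 30, 200, 150), 2))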

An SSaaS service would be a win-win for both the service provider and the enterprise, which can tap the creative potential of its employees.

Social Software as a Service (SSaaS) will be extremely attractive as we move to a flatter and a more knowledge intensive world.


Towards an auction-based Internet

The post below was quoted and discussed extensively in GigaOM, 14 Jan 2011 – “Software Defined Networks could create an auction-based bazaar” (see the link).

Published in Telecom Asia, Jan 13, 2012 – Towards an auction-based internet

Are we headed to an auction-based Internet? This train of thought (no pun intended), which struck me while I was travelling from Chennai to Bangalore last evening, was the result of the synthesis of different ideas and technologies I had read about in the recent past.

The current state of technology and technology trends do seem to indicate such a possibility. An auction-based internet would be a business model in which bandwidth is allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an eventuality is a distinct possibility considering the economics and latencies involved in data transfer, the evolution of the smart grid concept and the emergence of the promising technology known as the OpenFlow protocol. This is further elaborated below.

Firstly, in the book “Grids, Clouds and Virtualization” by Massimo Cafaro and Giovanni Aloisio, the authors highlight a typical problem of the computing infrastructure of today. They contend that a key issue in large-scale computing is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. They quote Jim Gray (Turing Award, 1998), whose paper “Distributed Computing Economics” states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. This is in fact used in the Hadoop paradigm, where the principle of locality is maintained by keeping the programs close to the data on which they operate.

The book highlights another interesting fact. It says the “cheapest and fastest way to move a Terabyte cross country is sneakernet” (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another). Google used sneakernet to transfer 120 TB of data. The SETI@home project also used sneakernet to transfer data recorded by their telescope in Arecibo, Puerto Rico, stored on magnetic tapes, to Berkeley, California.
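A quick back-of-the-envelope calculation shows why sneakernet wins for volumes like Google’s 120 TB. The Python sketch below simply computes the transfer time for that much data over a few link speeds; the link speeds chosen are illustrative assumptions.

# Rough arithmetic behind the 'sneakernet' observation:
# how long does it take to push 120 TB over a network link?

TB = 1e12  # bytes
data_bytes = 120 * TB

for label, mbps in (("100 Mbps", 100), ("1 Gbps", 1000), ("10 Gbps", 10000)):
    seconds = data_bytes * 8 / (mbps * 1e6)
    print(f"{label}: {seconds / 86400:.1f} days")

# 100 Mbps: ~111 days, 1 Gbps: ~11 days, 10 Gbps: ~1.1 days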

It is now a well-known fact that mobile and fixed-line data has virtually exploded, clogging the internet. YouTube, video downloads and other streaming data choke the data pipes of the internet, and Service Providers have not found a good way to monetize this data explosion. While there has been a tremendous advancement in CPU processing power (of the order of petaflops) and enormous increases in storage capacity (of the order of petabytes) coupled with dropping prices, there has been no corresponding drop in bandwidth prices in relation to the bandwidth capacity.

Secondly, in the book “Hot, Flat and Crowded”, Thomas L. Friedman describes the “Smart Homes” of the future, in which all the home appliances will have sensors and will participate in the energy auction in real time as part of the Smart Grid. The price of energy in the Energy Grid fluctuates like stock prices, since enterprises bid for energy during the day. In his Smart Home, Friedman envisions a situation in which the washing machine turns on during off-peak hours when the price of energy in the energy grid is low. In this way all the appliances in the homes of the future will minimize energy consumption by adjusting their cycles accordingly.

Why could the internet not behave in a similar fashion? The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. Why can we not have an intelligent network in which the prices of different data transfer rates vary depending on the time of day, the type of traffic and the quality of service required? Could the internet be based on an auction mechanism in which different devices bid for bandwidth based on the urgency, speed and quality of service required? Is this possible with the routers and switches of today?

The answer is yes. This can be achieved by a new, path-breaking innovation known as Software Defined Networks (SDNs), based on the OpenFlow protocol. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley, and represents a paradigm shift in the way networking elements operate. Do read my post “Software Defined Networks: A glimpse of tomorrow” for a more detailed look at SDNs. SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, it is necessary for the logic in the Flow Controller to be updated dynamically based on the bid price.

For example, we could assume that a corporation has three different classes of traffic: immediate, ASAP, and ‘send when the price falls below $x’. Based on the upper ceiling for the bid price, the OpenFlow controller will allocate a flow for the corporation’s immediate traffic. For the ASAP class, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b, and the OpenFlow controller will ensure that it can arrange such a flow. The last type of traffic, which is not urgent, will be sent during non-peak hours when the price drops below $x. This requires that the OpenFlow controller be able to allocate different flows dynamically based on winning the auction process in this scheme. The current protocols of the internet, namely RSVP and DiffServ, allocate pipes based on traffic type and class, and the allocation is static once made. The auction strategy, in contrast, enables OpenFlow to dynamically adjust traffic flows based on the current bid price prevailing in that part of the network.
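To illustrate the idea, here is a toy Python sketch of the bidding logic such a controller might run. This is not the OpenFlow protocol or any real controller’s API; the flow classes, prices and the install_flow() placeholder are assumptions used purely to show how the three traffic classes could be matched against the prevailing bid price.

# Toy sketch of auction-style admission of flows on top of an SDN controller.
# Installing actual flow entries on switches is abstracted away as install_flow().

current_price = 0.40   # prevailing spot price for bandwidth in this network segment

flow_requests = [
    {"name": "corp-immediate", "policy": "immediate", "max_bid": 1.00},
    {"name": "corp-asap",      "policy": "range",     "low": 0.20, "high": 0.50},
    {"name": "corp-bulk",      "policy": "below",     "ceiling": 0.10},
]

def install_flow(name, price):
    # Placeholder for pushing a flow entry to the switches via the controller
    print(f"flow '{name}' admitted at ${price:.2f}/unit")

def run_auction(requests, price):
    for req in requests:
        if req["policy"] == "immediate" and price <= req["max_bid"]:
            install_flow(req["name"], price)
        elif req["policy"] == "range" and req["low"] <= price <= req["high"]:
            install_flow(req["name"], price)
        elif req["policy"] == "below" and price <= req["ceiling"]:
            install_flow(req["name"], price)
        else:
            print(f"flow '{req['name']}' deferred at ${price:.2f}/unit")

run_auction(flow_requests, current_price)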

The ability of the OpenFlow protocol to dynamically allocate different flows would once and for all solve the problem of monetizing mobile and fixed-line data. Users can decide the type of service they are interested in and choose appropriately. This will be a win-win for both the Service Providers and the consumer. The Service Provider will be able to get an ROI on the infrastructure based on the traffic flowing through its network. The consumer, rather than paying a fixed access charge, could pay a smaller charge because of low bandwidth usage.

An auction-based internet is not just a possibility but would also be a worthwhile business model to pursue. The ability to route traffic dynamically based on an auction mechanism enables the internet infrastructure to be utilized optimally. It will serve the dual purpose of easing traffic congestion, as the highest bidders will get the pipes, and of monetizing data traffic based on its importance to the end user.

An auction-based internet is a very distinct possibility in our future, given the promise of the OpenFlow protocol.

All thoughts, ideas or counter opinions are welcome!


Cloud Computing – Show me the money!

Published in Telecom Lead – Cloud Computing – Show me the money!

A lot has been said about the merits of cloud computing and how it is going to be the technological choice of most enterprises in the not-so-distant future. But the key question that is bound to keep cropping up in the higher echelons of the enterprise is whether the cloud makes good business sense. While most know that cloud computing adopts a pay-per-use model, similar to regular utilities like electricity and water, and does away with upfront infrastructure costs, the nagging question for most senior management is whether cloud computing is a prudent choice in the long term.

This is not an easy question to answer and depends on a multitude of factors. The alternative to cloud computing is to have an in-house infrastructure of servers, hardware and software, software licenses, broadband links, firewalls etc. All of these form the Capital Expenditure (CAPEX) for the organization. In addition to these expenses are the Operational Expenditures (OPEX) of real estate to house the equipment, power supply systems, cooling systems, maintenance personnel, annual maintenance contracts (AMCs) etc., which are recurring expenses for the organization.

Cloud Computing does away completely with the procurement of hardware, software, databases, licenses etc., and an enterprise should be able to host its application within a couple of hours, provided it knows ahead of time the resources the application will need.

Hence, while the upfront and running costs of maintaining an in-house data center are high in comparison to the zero upfront cost of deploying on the cloud, the steeper operational costs of the cloud will eventually catch up with those of the in-house infrastructure.

Depending on how well the application is designed, the point at which the cumulative running costs of the cloud break even with the in-house data center can be made to occur a couple of years after the application is deployed. Assuming that the break-even happens in three years, the advantage of cloud deployment is that the enterprise does not have to worry about equipment obsolescence, software upgrades and so on, not to mention the depreciation of the equipment.
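A simple back-of-the-envelope model of this break-even point is sketched below in Python. All the cost figures are hypothetical assumptions, chosen only to illustrate how the cumulative cloud spend catches up with the in-house CAPEX plus OPEX after roughly three years.

# Back-of-the-envelope break-even comparison. All figures are hypothetical.

CAPEX_IN_HOUSE = 120_000   # upfront servers, licenses, firewalls, etc. ($)
OPEX_IN_HOUSE = 3_000      # power, cooling, AMC, staff per month ($)
CLOUD_MONTHLY = 6_500      # pay-per-use cloud bill per month ($)

def break_even_month(horizon_months=60):
    # First month at which cumulative cloud spend exceeds in-house spend
    for m in range(1, horizon_months + 1):
        in_house = CAPEX_IN_HOUSE + OPEX_IN_HOUSE * m
        cloud = CLOUD_MONTHLY * m
        if cloud >= in_house:
            return m
    return None

print(break_even_month())   # with these numbers, ~35 months (about 3 years)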

Moreover, cloud technology is extremely useful to enterprises planning to deploy applications for which it is difficult to forecast the traffic that will hit them. Where the traffic may be intermittent, bursty or seasonal, the cloud makes perfect business sense since it can scale up or scale down depending on the traffic.

Some typical applications which are prime candidates for the cloud are CRM software, office tools, testing tools, online retail stores, webmail etc.

One possible worry for the enterprise will be security concerns when deploying to the public cloud. In such situations the organization can take a hybrid approach, where its sensitive data is hosted in in-house data centers and its main application is hosted on a public cloud.

Hence in most situations cloud deployments do have a definite edge for certain key applications of the enterprise.


The Business of Cloud Computing

Cloud Computing is the spanking new paradigm in the world of computing. The key differentiator in this technology is that the enterprise only pays for the amount of resources used – be it CPUs, memory or databases. While it does away with Capital Expenditure for organizations by providing a utility model of pricing, it results in recurring Operating Expenses for the organization. However, the important thing is that the cloud grows and shrinks according to demand, and hence the cost to the organization depends on the traffic it generates. While web-based applications are prime candidates for the cloud, other equally eligible candidates are batch processing jobs, nightly builds and CPU-intensive analytics. Except in the case of web applications, a reasonable estimate can be made of the resources these other applications need and an appropriate choice made on the cloud.

This article looks at web applications, where the traffic on the site can be seasonal and can vary during periods of the day. Besides, web sites should be capable of handling bursty traffic with enormous loads at particular intervals.

The important consideration for web sites is to ensure that the application is truly optimized and scales horizontally. While it appears that scaling out will occur for any reasonably designed application, the issue is that as the number of hits on the web site increases, the response time rises steeply while the number of transactions per second plateaus at a particular load level and does not increase after that. In other words, for a given CPU instance configuration the peak transactions per second will reach a particular limit and cannot be increased any further. However, the cloud also provides a key component, the load balancer, which along with auto scaling creates new instances when this threshold is reached.
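The sketch below captures this scale-out rule in a few lines of Python. The plateau value and target utilization are illustrative assumptions; real cloud providers implement this through managed load balancers and auto-scaling policies rather than code like this.

# Simplified scale-out rule in the spirit of cloud auto scaling.
# Thresholds and figures are purely illustrative.
import math

PEAK_TPS_PER_INSTANCE = 400   # measured throughput plateau of one instance
TARGET_UTILIZATION = 0.75     # scale out before hitting the plateau

def desired_instances(current_tps):
    # Number of instances needed so each runs below its TPS plateau
    capacity_per_instance = PEAK_TPS_PER_INSTANCE * TARGET_UTILIZATION
    return max(1, math.ceil(current_tps / capacity_per_instance))

for load in (150, 900, 2400):
    print(load, "tps ->", desired_instances(load), "instance(s)")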

What are the business considerations that need to be taken into account while designing for the cloud?

One needs to be conservative in choosing the instance type. While larger instances provide better performance, they also cost more. Hence the instance type should be large enough and no larger. It would be wasteful to use extremely large instances where the last instance handles only a fraction of the total traffic while costing a lot more.

The analogy is this: if 16 units of task have to be performed, it is better to use small instances each capable of handling 3 units, requiring a total of 6 small instances (6 * 3 = 18 > 16), rather than large instances each capable of handling 5 units, requiring a total of 4 large instances (5 * 4 = 20 > 16). The second option results in more wasted processing power.
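The same arithmetic can be written as a short Python sketch that compares the two instance sizes by the capacity they waste; the per-instance capacities are the illustrative figures from the analogy above.

# The arithmetic behind the sizing analogy: prefer the instance size
# that wastes the least capacity for the workload. Capacities are illustrative.
import math

def instances_needed(total_units, units_per_instance):
    return math.ceil(total_units / units_per_instance)

workload = 16                       # units of task to be performed
for size, capacity in (("small", 3), ("large", 5)):
    n = instances_needed(workload, capacity)
    waste = n * capacity - workload
    print(f"{size}: {n} instances, capacity {n * capacity}, wasted {waste} units")

# small: 6 instances, capacity 18, wasted 2 units
# large: 4 instances, capacity 20, wasted 4 units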

Assume that the upfront cost to the organization of hosting the website in-house is ‘P’, and that this cost, amortized over a period of 1 year, works out to ‘p’ per hour. Further, if the instance cost is ‘c’ per hour, ‘n’ is the number of instances needed to support the projected demand, and the revenue to the organization hosting the website is ‘r’ per 1000 hits, then a cloud deployment will make business sense when

(r_h – n * c_h) – p_h > 0, where the subscript h denotes the per-hour value

As long as this expression is positive the organization will profit. However, as the traffic increases and the throughput of the website plateaus, the enterprise will hit a ‘window of diminishing returns’.

However, if the performance of the application is poor and the number of instances needed to support the traffic is disproportionately large, then the above expression will be negative and will result in a loss to the organization:

(r_h – n * c_h) – p_h < 0
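Plugging illustrative numbers into these inequalities makes the point clearer. In the Python sketch below, r_h, c_h, p_h and n carry the meanings defined above, and the two calls contrast a well-designed application with a poorly designed one that needs many more instances; the numbers themselves are assumptions.

# Hourly margin of a cloud deployment, using the symbols defined above.
# r_h: revenue per hour, c_h: cost per instance-hour,
# p_h: amortized in-house cost per hour, n: instances needed.

def hourly_margin(r_h, n, c_h, p_h):
    return (r_h - n * c_h) - p_h

# well-designed application: few instances carry the load
print(hourly_margin(r_h=50.0, n=10, c_h=0.50, p_h=20.0))   #  25.0 -> profit

# poorly designed application: many more instances for the same traffic
print(hourly_margin(r_h=50.0, n=80, c_h=0.50, p_h=20.0))   # -10.0 -> loss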

Hence deployment to the cloud, besides requiring a strong technical background, also needs sound business sense in order to reap the benefits of the cloud.


Cloud, analytics key tools for today’s telcos

Published in Telecom Asia Aug 20, 2010 – http://bit.ly/dxKbsR

Operators facing dwindling revenue from wireline subscribers, fierce tariff wars and exploding mobile data traffic are continually being pressured to do more for less. Spending on infrastructure is increasing as they look to provide better service within slender budgets.
In these tough times telcos have to devise new and innovative strategies and make judicious technology choices. Two promising technologies, cloud computing and analytics, are shaping up as among the best choices to make.
Cloud architecture does away with the worry of planning the computing resources needed, the real estate, the costs of acquiring them and thoughts of their obsolescence. It allows the CSPs to purchase processing power, platforms and databases almost as a utility, like electricity or water.
Cloud consumers only pay for what they use. The magic of this promising technology is the elasticity that the cloud provides – it expands to accommodate increasing demands and contracts when the demand drops.
The cloud architectures of Amazon, Google and Microsoft – currently the three biggest cloud providers – vary widely in their capabilities and features. These strengths and weaknesses should be taken into account while planning a cloud system. Each is best suited for only a certain class of applications unique to each individual cloud provider.
On one end of the spectrum Amazon’s EC2 (Elastic Compute Cloud) provides a virtual machine and a wealth of associated tools for storage and notifications. But the trade-off for increased flexibility is that users must take responsibility for designing resiliency into their systems.
On the other end is Google’s App Engine, a highly scalable cloud architecture that handles failures but is a lot more restrictive. Microsoft’s Azure is based on the .NET architecture and in terms of flexibility and features lies between these two.
When implementing such an architecture, an organization should take a long hard look at its computing software inventory to decide which applications are worthy of migrating to the cloud. The best candidates are processing-intensive in-house applications that deliver standardized functionality and interfaces, and whose software architecture is made up of loosely coupled communicating systems.
Applications that deal with sensitive data should be retained within the organization’s internal computing infrastructure, because security is currently the most glaring issue with the cloud. Cloud providers do provide various levels of security to users, but this is an area in keen need of standardization.
But if the CSP decides to build components of an OSS system – rather than buying a pre-packaged system – it makes good business sense to develop for the cloud.
A cloud-based application must have a few essential properties. First, it is preferable if the application was designed on SOA principles. Second, it should be loosely coupled. And lastly, it needs to be an application that can be scaled rapidly up or down based on the varying demands.
The other question is which legacy systems can be migrated. If the OSS/BSS systems are based on commercial off-the-shelf systems these can be excluded, but an offline bill processing system, for example, is typically a good candidate for migration.
Mining wisdom from data
The cloud can serve as the perfect companion for another increasingly vital operational practice – data analytics. The cloud is capable of modeling large amounts of data, and running models to process and analyze this data. It is possible to run thousands of simultaneous instances on the cloud and mine for business intelligence in the oceans of telecom data operators generate.
Today’s CSP maintains software systems generating all kinds of customer data, covering areas ranging from billing and order management to POS, VAS and provisioning. But perhaps the largest and richest vein of subscriber information is the call detail records database.
All this data is worthless, though, if it cannot be mined and analyzed. Formal data mining and data analytics tools can be used to identify patterns and trends that will allow operators to make strategic, knowledge-driven decisions.
Analytics involves many complex areas like predictive analytics, neural nets, decision trees and classification. Some of the approaches used in data analytics include prediction, deviation detection, degree of influence and classification.
With the intelligence that comes through analytics it is possible to determine customer buying patterns, identify causes for churn and develop strategies to promote loyalty. Call patterns based on demography or time of day will enable the CSPs to create innovative tariff schemes.
Determining the relations and buying patterns of users will provide opportunities for up-selling and cross-selling. The ability to identify marked deviations in customer behavior patterns helps the CSP decide ahead of time whether a trend is a warning bell or an opportunity waiting to be tapped.
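As a tiny, purely illustrative example of mining call detail records, the Python sketch below finds the busiest calling hour from a handful of made-up CDRs. The record layout and data are assumptions; at operator scale this kind of aggregation would run across many cloud instances, for example as MapReduce jobs.

# Toy example of mining call detail records (CDRs) for time-of-day patterns.
# The record layout and data are hypothetical.
from collections import Counter

cdrs = [
    # (subscriber_id, hour_of_day, duration_seconds)
    ("S1", 9, 120), ("S2", 9, 300), ("S1", 21, 60),
    ("S3", 21, 540), ("S2", 21, 45), ("S3", 10, 200),
]

calls_per_hour = Counter(hour for _, hour, _ in cdrs)
busiest_hour, calls = calls_per_hour.most_common(1)[0]
print(f"busiest hour: {busiest_hour}:00 with {calls} calls")

# A CSP could use such hourly profiles to design off-peak tariff schemes.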
Tinniam V Ganesh


The evolutionary road for the Indian Telecom Network

Published in Voice & Data Apr 14, 2010

Abstract: In this era of technological invention, with the plethora of technologies, platforms and paradigms, how should the Indian telecom network evolve? The evolutionary path for the telecom network should clearly be one that ensures both customer retention and growth, while at the same time being capable of handling the increasing demands on the network. The article below looks at some of the technologies that make the most sense in the current technological scenario.

The wireless tele-density in India has now reached 48% and is showing no signs of slowing down. The number of wireless users will only go up as penetration moves farther into the rural hinterland. In these times Communication Service Providers (CSPs) are faced with a multitude of competing technologies, frameworks and paradigms. On the telecom network side there are 2G, 2.5G, 3G and 4G. To add to the confusion there is a lot of buzz around Cloud technology, Virtualization, SaaS and femtocells, to name a few. With the juggernaut of technological development proceeding at a relentless pace, senior management in telcos and Service Providers the world over are faced with a bewildering choice of technologies while trying to keep spending at sustainable levels.

For a developing economy like India the path forward for telcos and CSPs is to evolve gradually from the current 2.5G service to the faster 3G services without trying to rush to 4G. The focus of CSPs and operators should be on customer retention and maintaining customer loyalty. The drive should be towards increasing the customer base by providing a superior customer experience rather than jumping onto the 4G bandwagon.

4G technologies, for example LTE and WiMAX, make perfect sense in countries like the US or Japan where smartphones are within the reach of a larger share of the populace, primarily due to their popularity and affordability in those countries. In India, smartphones, when they come, will be the sole preserve of high-flying executives and the urban elite. The larger population in India would tend to use regular mobile phones for VAS services like mobile payment and e-ticketing rather than for downloading video or watching live TV. In the US, it is rumored that iPhones with their data-hungry applications almost brought a major network to its knees. Hence, in countries like the US, it makes perfect sense for Network Providers to upgrade their network infrastructure to handle the increasing demand from data-hungry applications, and an upgrade to LTE or WiMAX would be logical there.

In our nation, with the growth in the number of subscribers, the thrust of Service Providers should be to promote customer loyalty by offering differentiated Value Added Services (VAS). The CSPs should try to increase network coverage so that the frustration of lost or dropped calls is minimal, and focus on providing a superior customer experience. Service Providers should try to attract new users by offering an enhanced customer experience through special Value Added Services (VAS). This becomes all the more important with the impending move to Mobile Number Portability (MNP). Once MNP is in the network, many subscribers will switch to Service Providers who offer better services and have more reliable network coverage. Another technique by which Service Providers can attract and retain customers is through the creation of App Stores. In the US, app stores for the iPhone have spawned an entire industry.
Mobile apps from app stores, besides providing entertainment and differentiation, can also be a very good money spinner.

While the economy continues to flounder the world over, Service Providers should try to reduce their Capital Expenditure (Capex) and Operating Expenditure (Opex) through the adoption of Software-as-a-Service (SaaS) for their OSS/BSS systems. Cloud technology, besides reducing the Total Cost of Ownership (TCO) for Network Providers, can be quite economical in the long run. Prior to migrating to the Cloud, all aspects of security should be thoroughly investigated by the Network Providers, along with critical decisions as to which areas of their OSS/BSS they would like to migrate to the Cloud.

While a move to leapfrog from 2G to 4G may not be required, it is imperative that, with the entry of smartphones like the iPhone 3GS, Nexus One and Droid into India, the CSPs be in a position to handle increasing bandwidth requirements. One technique to handle the issue of data-hungry smartphones is to offload data traffic to Wi-Fi networks or femtocells. Besides, professionals these days use dongles with their laptops to check email, browse and download documents. All of these put a strain on the network, and offloading data traffic to femtocells and Wi-Fi has been the solution chosen by leading Network Providers in the US.

Conclusion

So the road to gradual evolution of the network for Network Operators and Service Providers is to:
1. Evolve to 3G services from 2G/2.5G.
2. Create app stores to promote customer retention and loyalty and offer differentiated VAS services.
3. Improve network coverage uniformly and enhance the customer experience through specialized app stores.
4. Judiciously migrate some of the OSS/BSS functionality to the cloud or use SaaS, after investigating which applications of the enterprise can move to the cloud.
5. Offload data traffic to Wi-Fi networks or femtocells.

Tinniam V. Ganesh
