Designing for Cloud Worthiness

Cloud Computing is changing the rules of computing for the enterprise. Enterprises are no longer constrained by the capital costs of upfront equipment purchases. Rather, they can concentrate on the application, deploy it on the cloud, and pay in a utility style based on usage. Cloud computing essentially presents a virtualized platform on which applications can be deployed.

The Cloud exhibits the property of elasticity by automatically adding more resources to the application as demand grows and shrinking the resources when the demand drops. It is this property of elasticity of the cloud and the ability to pay based on actual usage that makes Cloud Computing so alluring.

However, to take full advantage of the cloud, the application must use the available cloud resources judiciously. It is important for applications that are to be deployed on the cloud to scale horizontally. This implies that the application should be able to handle more transactions per second as more resources are added to it. For example, if the application has been designed to run on a small instance (1.7 GHz, 32-bit, 160 GB of instance storage) with a throughput of 800 transactions per second, then adding four more such instances (five in all) should scale the system to handle 4,000 transactions per second.
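Under this linear-scaling assumption, capacity planning reduces to simple arithmetic. The sketch below uses the 800 transactions-per-second per-instance figure from the example; the function name is purely illustrative:

```python
# Back-of-envelope sizing under the linear horizontal-scaling assumption:
# each small instance sustains roughly 800 transactions per second.
PER_INSTANCE_TPS = 800

def instances_needed(target_tps):
    # ceiling division: round up so the target load is always covered
    return -(-target_tps // PER_INSTANCE_TPS)

print(instances_needed(4000))   # 5
print(instances_needed(4100))   # 6
```

In practice scaling is rarely perfectly linear, so such estimates should be validated with load tests.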

However, there is a catch in this. How does one determine the theoretical limit of transactions per second for a single instance? Ideally, we should maximize the throughput and minimize the latency of each instance before going to the next step of adding more instances on the cloud. One should squeeze the maximum performance out of the application on the instance of choice before using multiple instances. Typical applications perform reasonably well under small loads, but as the traffic increases the response time rises and the throughput starts to dip.

There is a need to run profiling tools and remove bottlenecks in the application. The standard refrain for applications to be deployed on the cloud is that they should be loosely coupled and stateless. However, most applications tend to be multi-threaded, with resources shared across various modules. The performance impact of locks and semaphores should be given due consideration, since a lot of time is typically wasted with threads in the wait state. A suitable technique should be used for providing concurrency among threads. The application should be analyzed to determine whether it is read-heavy and write-light or write-heavy and read-light, and suitable synchronization techniques such as reader-writer locks, message-queue-based exclusion or monitors should be used.
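To make the reader-writer case concrete, here is a minimal reader-preference reader-writer lock in Python. This is an illustrative sketch, not production code; a real application would likely use a library-provided primitive instead:

```python
import threading

class ReaderWriterLock:
    """Minimal reader-preference reader-writer lock (illustrative sketch)."""
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()   # guards the reader count
        self._resource_lock = threading.Lock()  # held while writing

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:              # first reader locks out writers
                self._resource_lock.acquire()

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:              # last reader lets writers in
                self._resource_lock.release()

    def acquire_write(self):
        self._resource_lock.acquire()

    def release_write(self):
        self._resource_lock.release()

# usage: many readers can proceed concurrently; a writer gets exclusive access
shared = {"hits": 0}
rw = ReaderWriterLock()

def read_hits():
    rw.acquire_read()
    try:
        return shared["hits"]
    finally:
        rw.release_read()

def bump_hits():
    rw.acquire_write()
    try:
        shared["hits"] += 1
    finally:
        rw.release_write()

threads = [threading.Thread(target=bump_hits) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(read_hits())   # 10
```

A reader-preference lock like this suits read-heavy workloads; a write-heavy application would favour a writer-preference or fair variant to avoid writer starvation.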

I have found callgrind extremely useful for profiling and gathering performance characteristics, along with KCachegrind for a graphical display of the performance data.

Another important technique to improve performance is to maintain an in-memory cache of frequently accessed data. Rather than making frequent queries to the database, periodic updates from the database can be stored in the in-memory cache. While this technique works fine for a single instance, handling in-memory caches across multiple instances in the cloud is quite a challenge: the cache needs to be distributed and shared among the instances. Memcached is an appropriate tool for maintaining such a distributed cache in the cloud.
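The single-instance version of this idea can be sketched as a small time-to-live cache. The `TTLCache` class and `loader` callback below are illustrative names; in a multi-instance deployment the local dictionary would be replaced by a shared memcached store:

```python
import time

class TTLCache:
    """Minimal in-memory cache with time-based refresh (illustrative sketch;
    a real multi-instance deployment would use memcached instead)."""
    def __init__(self, ttl_seconds, loader):
        self.ttl = ttl_seconds
        self.loader = loader        # hypothetical function that queries the database
        self._data = {}
        self._fetched_at = {}

    def get(self, key):
        now = time.time()
        stale = key not in self._data or now - self._fetched_at[key] > self.ttl
        if stale:                   # hit the backing store only when stale
            self._data[key] = self.loader(key)
            self._fetched_at[key] = now
        return self._data[key]

# usage: the loader stands in for an expensive database query
calls = []
def fake_db_lookup(key):
    calls.append(key)
    return key.upper()

cache = TTLCache(ttl_seconds=60, loader=fake_db_lookup)
print(cache.get("user:1"))   # USER:1  (loaded from the "database")
print(cache.get("user:1"))   # USER:1  (served from the cache)
print(len(calls))            # 1  -> only one database hit
```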

Once the application has been tuned for maximum performance, it can be deployed on the cloud and stress-tested for peak loads.

Some good tools for generating load on the application are loadUI and multi-mechanize. Personally, I prefer multi-mechanize, as its test scripts are written in Python and can easily be modified for the testing at hand. One can also simulate browser functionality to some extent with Python in multi-mechanize, which can prove useful.
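To give a flavour of what such a test script looks like, the sketch below mimics the general shape of a multi-mechanize script: a `Transaction` class whose `run()` method records timings in `custom_timers`. The HTTP request is replaced by a stand-in function so the example is self-contained; in a real script it would be an actual request to the application under test:

```python
import time

class Transaction:
    """Shaped like a multi-mechanize test script: the tool instantiates
    this class and calls run() repeatedly, collecting custom_timers.
    The real HTTP call is replaced here by a stand-in function."""
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        start = time.time()
        simulated_request()     # stand-in for e.g. a urllib request to the app
        self.custom_timers['Example_Home'] = time.time() - start

def simulated_request():
    time.sleep(0.001)           # pretend the server took about a millisecond

# mimic the harness: run the transaction a few times and collect latencies
txn = Transaction()
latencies = []
for _ in range(20):
    txn.run()
    latencies.append(txn.custom_timers['Example_Home'])
print(len(latencies))                    # 20
print(all(l > 0 for l in latencies))     # True
```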

Hence, while the cloud provides CPUs, memory and database resources on demand, the enterprise needs to design applications such that these resources are used judiciously. Otherwise, if it deploys inefficient applications that hog resources without commensurate revenue-generating performance, the enterprise will not be able to reap the benefits of utility computing.

INWARDi Technologies

Cloud Computing – Design Considerations

Cloud Computing is definitely turning out to be the proverbial carrot for enterprises to host their applications on the public cloud. The cloud promises many benefits to its users: it obviates the need for upfront capital expenses on computing infrastructure, real estate and maintenance personnel, and it allows for scaling up or scaling down as demand on the application fluctuates.

While the advantages are many, migrating an application onto the cloud is no trivial task. The cloud is essentially composed of commodity servers. It creates multiple instances of the application and runs them on the same or on different servers. The benefit of executing in parallel is that the same task can be completed faster, giving enterprises the ability to quickly scale to handle increasing demand.

But deploying applications on the cloud requires that the application be re-architected to take advantage of this parallelism, and handling parallelization is no simple task. The key attributes that distributed systems need to address are consistency and availability. If there are variables that need to be shared across the parallel instances, the application must make special provisions to keep them consistent. Similarly, the application must be designed to handle failures.

Applications that are intended for the cloud must be designed to scale out rather than scale up. Scaling up refers to adding more horsepower by way of faster CPUs, more RAM and higher throughput. Applications deployed on the cloud instead need the ability to scale out, or scale horizontally, where more servers of the same capacity are added. Designing for horizontal scalability is the key to cloud computing architectures.

One of the key principles to keep in mind while designing for the cloud is to ensure that the application is composed of loosely coupled processes, preferably based on SOA principles. While a multi-threaded architecture with resource sharing through mutexes works in monolithic applications, such an architecture is of no help when there are multiple instances of the same application running on different servers. How does one maintain consistency of a shared resource across instances? This is a tough problem to solve. Ideally, the application should be thread-safe and based on a shared-nothing architecture. One technique is to use the queues that the cloud provides as a means of sharing across instances, though this may impact the performance of the system. Another is to use memcached, which has been successfully deployed on the cloud by Facebook, Twitter, LiveJournal, Zynga and others. Still another is to use the map-reduce paradigm, where the 'map' step operates on data independently in each instance and the 'reduce' step consolidates the results consistently.
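The map-reduce pattern can be illustrated with the classic word-count example. This single-process sketch only shows the shape of the pattern; a real deployment (e.g. on Hadoop) would distribute the map and reduce phases across many instances:

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # each mapper emits (key, 1) pairs independently -- no shared state
    return [(word, 1) for word in document.split()]

def shuffle(mapped):
    # group values by key before handing them to the reducers
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # the reducer consolidates each key's values into one result
    return {key: sum(values) for key, values in groups.items()}

documents = ["the cloud scales out", "the cloud is elastic"]
mapped = list(chain.from_iterable(map_phase(d) for d in documents))
counts = reduce_phase(shuffle(mapped))
print(counts["the"], counts["cloud"])   # 2 2
```

The key property is that mappers never share variables, which is exactly what makes the pattern a good fit for the shared-nothing architecture described above.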

Another key consideration is the need to support availability requirements. Since the cloud is made up of commodity hardware, there is every possibility of servers failing. The application must be designed with inbuilt resilience to handle such failures. This could be done by designing an active-standby architecture or by providing for checkpointing so that the application can restart from some known previous point.
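Checkpointing can be sketched in a few lines: persist progress to durable storage so that a restarted instance resumes from the last known-good point instead of starting over. The file name and per-item granularity here are purely illustrative; real systems checkpoint less frequently and to replicated storage:

```python
import json
import os
import tempfile

# Illustrative checkpoint location (a real system would use replicated storage)
CHECKPOINT = os.path.join(tempfile.gettempdir(), "app_checkpoint.json")

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def load_checkpoint():
    # on restart, resume from the last saved state, or from scratch
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"processed": 0}

def process(items):
    state = load_checkpoint()
    for i in range(state["processed"], len(items)):
        # ... handle items[i] here ...
        state["processed"] = i + 1
        save_checkpoint(state)      # in practice, checkpoint less often
    return state["processed"]

if os.path.exists(CHECKPOINT):
    os.remove(CHECKPOINT)           # start clean for this demo
print(process(["a", "b", "c"]))     # 3
print(load_checkpoint()["processed"])   # 3 -> a restart would resume here
```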

Hence, while cloud computing is the way of the future, the application needs to be carefully designed so that full advantage of the cloud can be taken.

The evolutionary road for the Indian Telecom Network

Published in Voice & Data Apr 14, 2010

Abstract: In this era of technological inventions, with a plethora of technologies, platforms and paradigms, how should the Indian telecom network evolve? The evolutionary path for the telecom network should clearly be one that ensures both customer retention and growth, while at the same time being capable of handling the increasing demands on the network. The article below looks at some of the technologies that make the most sense in the current technological scenario.

The wireless tele-density in India has now reached 48% and is showing no signs of slowing down. The number of wireless users will only go up as the penetration moves farther into the rural hinterland. In these times Communication Service Providers (CSPs) are faced with a multitude of competing technologies, frameworks and paradigms. On the telecom network side there are 2G, 2.5G, 3G and 4G. To add to the confusion, there is a lot of buzz around Cloud technology, Virtualization, SaaS and femtocells, to name a few. With the juggernaut of technological development proceeding at a relentless pace, senior management in telcos and service providers the world over are faced with a bewildering choice of technology while trying to keep spending at sustainable levels.

For a developing economy like India, the path forward for telcos and CSPs is to gradually evolve from the current 2.5G services to the faster 3G services without trying to rush to 4G. The focus of CSPs and operators should be on customer retention and maintaining customer loyalty. The drive should be to increase the customer base by providing a superior customer experience rather than jumping onto the 4G bandwagon. 4G technologies, for example LTE and WiMAX, make perfect sense in countries like the US or Japan where smartphones are within the reach of a larger set of the populace, primarily due to the popularity and affordability of these phones in such countries.
In India smartphones, when they come, will be the sole preserve of high-flying executives and the urban elite. The larger population in India would tend to use regular mobile phones for VAS services like mobile payment and e-ticketing rather than downloading video or watching live TV. In the US, it is rumored, iPhones with their data-hungry applications almost brought a major network to its knees. Hence, in countries like the US, it makes perfect sense for Network Providers to upgrade their network infrastructure to handle the increasing demand from data-hungry applications; the upgrade to LTE or WiMAX would be logical there.

In our nation, with the growth in the number of subscribers, the thrust of Service Providers should be to promote customer loyalty by offering differentiated Value Added Services (VAS). The CSPs should try to increase network coverage so that the frustration of lost or dropped calls is minimal, and focus on providing a superior customer experience. The Service Providers should try to attract new users by offering an enhanced customer experience through special Value Added Services. This becomes all the more important with the impending move to Mobile Number Portability (MNP). Once MNP is in the network, many subscribers will switch to Service Providers who offer better services and have more reliable network coverage.

Another technique by which Service Providers can attract and retain customers is through the creation of app stores. In the US, app stores for the iPhone have spawned an entire industry. Mobile apps from app stores, besides providing entertainment and differentiation, can also be a very good money spinner. While the economy continues to flounder the world over, the Service Providers should try to reduce their Capital Expenditure (Capex) and Operating Expenditure (Opex) through the adoption of Software-as-a-Service (SaaS) for their OSS/BSS systems.
Cloud technology, besides reducing the Total Cost of Ownership (TCO) for Network Providers, can be quite economical in the long run. However, prior to migrating to the cloud, all aspects of security should be thoroughly investigated by the Network Providers, and critical decisions made as to which areas of their OSS/BSS they would like to migrate.

While a move to leapfrog to 4G from 2G may not be required, it is imperative that, with the entry of smartphones like the iPhone 3GS, Nexus One and Droid into India, the CSPs be in a position to handle increasing bandwidth requirements. Some techniques to handle the issue of data-hungry smartphones are to offload data traffic to Wi-Fi networks or femtocells. Besides, professionals these days use dongles with their laptops to check email, browse and download documents. All these put a strain on the network, and offloading data traffic to femtocells and Wi-Fi has been the solution chosen by leading Network Providers in the US.

Conclusion

So the road to gradual evolution of the network for the Network Operators and Service Providers is to:
1. Evolve to 3G services from 2G/2.5G.
2. Create app stores to promote customer retention and loyalty and offer differentiated VAS services.
3. Improve network coverage uniformly and enhance the customer experience through specialized app stores.
4. Judiciously migrate some of the OSS/BSS functionality to the cloud or use SaaS, after investigating which applications of the enterprise can move to the cloud.
5. Offload data traffic to Wi-Fi networks or femtocells.

Tinniam V. Ganesh
