Managing Multi-Region Deployments

If there is one lesson from this year’s major Amazon EC2 outage, it is “don’t deploy all your application instances in a single region”. The outage clearly demonstrated that entire regions are not immune to disasters. Thus, it has become imperative for designers and architects to deploy applications spanning multiple regions. Currently there are 4 major regions – US-East, US-West, Europe and APAC.

Both fundamentally and from a strategic point of view, it makes sense to deploy web applications in different regions, e.g. in both US-East and US-West. This builds a certain amount of geographical resiliency into the application. In this way you are protected from major debacles like the Amazon EC2 outage of April 2011, or even a meteor crashing into one of the data centers.

Deploying instances in different regions is much like minimizing risk by diversifying your portfolio. Besides including other methods of fault tolerance, the design of the application should also incorporate geographical resilience.

Currently Amazon’s ELB does not support load balancing across regions; it can only distribute traffic among instances in different availability zones of a single region. The solution is to use a DNS service such as UltraDNS, DNSMadeEasy or DynDNS.

These DNS services provide GeoIP-based load balancing that can distribute traffic based on the region from which it originates. Besides balancing load by origin, GeoIP-based traffic distribution has the added benefit of routing users to the application deployment closest to them, thus reducing latency.

The GeoIP-based traffic distributor routes traffic to the closest region, and Amazon’s ELB can then distribute that traffic internally among the instances within the region. For a look at some typical problems in multi-region cloud deployments, do look at my post “Cache-22”.
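To make the idea concrete, here is a minimal sketch of GeoIP-style region routing in Python. The regional ELB endpoint names are hypothetical placeholders; in practice the DNS service performs the IP-to-region mapping itself and simply returns the appropriate record.

```python
# Minimal sketch of GeoIP-style region routing: the client's region selects
# a regional ELB endpoint, which then balances traffic within the region.
# All endpoint names here are hypothetical placeholders.

REGIONAL_ELB = {
    "us-east": "myapp-us-east.elb.example.com",
    "us-west": "myapp-us-west.elb.example.com",
    "europe":  "myapp-eu.elb.example.com",
    "apac":    "myapp-apac.elb.example.com",
}

def route_request(client_region):
    # A real GeoIP DNS service maps the client's IP address to a region
    # itself; here the region is simply passed in.
    return REGIONAL_ELB.get(client_region, REGIONAL_ELB["us-east"])

print(route_request("europe"))  # myapp-eu.elb.example.com
```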


Figure: Deploying across regions


Designing a Scalable Architecture for the Cloud

The promise of the cloud is unlimited computing power and storage capacity coupled with a pay-per-use policy. This makes the cloud particularly irresistible for hosting web applications and other applications whose demand varies periodically. To take full advantage of the cloud, the application must be designed for optimum performance. Though the cloud provides resources on demand, a badly designed application can hog resources and prove extremely expensive in the long run.

One of the first requirements for deploying applications on the cloud is that they should be scalable. Scalability denotes the ability to handle increasing traffic simply by adding more computing resources of the same kind, rather than adding resources with greater horsepower. This is also referred to as scaling horizontally.

Assuming that the application has been sufficiently profiled and tuned for high performance, there are certain key considerations to take into account while deploying on the cloud, public or private: being able to scale on demand, providing for high availability and resiliency, and having sufficient safeguards against failures.

Given these requirements, a scalable design for the cloud can be viewed as being made up of the following 5 tiers:

The DNS tier – In this tier the user’s domain is hosted on a DNS service like UltraDNS or Route 53. These services distribute DNS lookups geographically, so the user connects to a DNS server that is geographically closer, speeding up DNS lookup times. Moreover, since the lookups are distributed geographically, this also builds geographic resiliency as far as DNS lookups are concerned.

Load Balancer-Auto Scaling Tier – This tier is responsible for balancing the incoming traffic among compute instances in the cloud. The load balancing may use a simple round-robin technique or may be based on the actual CPU utilization of the individual instances. Typically this layer should also have an auto-scaling policy that adds more instances when traffic to the application rises above a threshold, and terminates instances when traffic falls below a specific threshold.
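As a rough illustration, the scaling decision at this tier boils down to comparing average utilization against thresholds. The sketch below assumes CPU samples are available from some monitoring source; the 70%/30% thresholds and the minimum fleet size are illustrative choices, not prescriptions.

```python
# Sketch of a threshold-based auto-scaling decision. It assumes average CPU
# utilization samples are available from monitoring; the 70%/30% thresholds
# and the minimum fleet size are illustrative choices.

SCALE_OUT_CPU = 70.0   # add an instance above this average CPU %
SCALE_IN_CPU = 30.0    # remove an instance below this average CPU %

def scaling_decision(cpu_samples, instance_count, min_instances=2):
    avg_cpu = sum(cpu_samples) / len(cpu_samples)
    if avg_cpu > SCALE_OUT_CPU:
        return "scale out: launch one more instance"
    if avg_cpu < SCALE_IN_CPU and instance_count > min_instances:
        return "scale in: terminate one instance"
    return "no change"

print(scaling_decision([82.0, 75.5, 90.1], instance_count=3))  # scale out
```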

Compute-Instance Tier – This layer hosts the actual application in individual compute instances on the cloud. It is assumed that the application has been tuned for maximum performance. The choice of small, medium or large instance types should be based on the traffic-handling capacity of the instance type versus its cost per hour.

Cache Tier – This is an important layer in a cloud application with multiple instances. The cache tier provides a distributed cache shared by all the instances. With a distributed caching system like memcached it is possible to share global data between instances. Memcached clients use a consistent-hashing technique to distribute data among a set of participating servers; consistent hashing allows servers to crash or join the cache layer while remapping only a small fraction of the keys.
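A minimal consistent-hashing ring, in the spirit of what memcached clients do, might look like the following sketch. Virtual nodes smooth the key distribution; when a server is removed, only the keys that hashed to it move to a neighbour.

```python
import bisect
import hashlib

# Minimal consistent-hashing ring, in the spirit of memcached's client-side
# key distribution. Each server is placed on the ring many times ("virtual
# nodes") so keys spread evenly across servers.

class HashRing:
    def __init__(self, servers, vnodes=100):
        self.ring = []  # sorted list of (hash, server)
        for server in servers:
            self.add(server, vnodes)

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add(self, server, vnodes=100):
        for i in range(vnodes):
            bisect.insort(self.ring, (self._hash("%s#%d" % (server, i)), server))

    def remove(self, server):
        # Only keys that mapped to this server's virtual nodes are remapped.
        self.ring = [(h, s) for h, s in self.ring if s != server]

    def get(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        idx = bisect.bisect(self.ring, (self._hash(key),)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["cache1:11211", "cache2:11211", "cache3:11211"])
print(ring.get("session:42"))   # e.g. cache2:11211
ring.remove("cache2:11211")     # only cache2's keys move elsewhere
print(ring.get("session:42"))
```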

Database Tier – The database tier is one of the most critical layers of the application. At a minimum, the database should be configured in an active-standby mode; ideally the active and standby should be in different availability zones to better handle disasters in a particular zone. Another consideration is to have separate read replicas that handle reads to the database while the primary database handles the write operations.
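A simple way to picture the read-replica setup is a routing function that sends writes to the primary and spreads reads across the replicas. The hostnames below are hypothetical placeholders; a real setup would route actual database connections.

```python
import itertools

# Sketch of read/write splitting: writes go to the primary, reads are
# round-robined across the read replicas. Hostnames are placeholders.

PRIMARY = "db-primary.example.com"
READ_REPLICAS = itertools.cycle(
    ["db-replica-1.example.com", "db-replica-2.example.com"])

def pick_endpoint(sql):
    if sql.lstrip().upper().startswith("SELECT"):
        return next(READ_REPLICAS)   # a read goes to the next replica
    return PRIMARY                   # everything else goes to the primary

print(pick_endpoint("SELECT * FROM users"))         # a replica
print(pick_endpoint("UPDATE users SET name='x'"))   # the primary
```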

Besides the above considerations, it is always good to host the web application across different availability zones, thus safeguarding against disasters in a particular zone.

The Many Faces of Latency

Nothing is more damaging to a website than poor response times, and latency is probably the most serious issue that web application developers have to contend with. Whether it is a retail application or an e-ticketing application, poor response times play havoc with the user experience. Latency has many faces, each contributing in a small way to the overall response time of the application. This article looks at some of the key culprits that contribute to website latency.

Link Latencies: This is one of the major contributors. The link speed from the user’s computer to the website plays a major role. For applications hosted on the public cloud it makes sense to deploy in multiple regions dispersed geographically, ensuring that people across the globe reach the website from the cloud deployment closest to them. Besides, with the recent Amazon EC2 outage it definitely makes sense to deploy across regions, promoting geographical resiliency in the application. Dispersing the application geographically connects the user through the fewest intervening hops, thus reducing response times.
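One rough way to sample link latency from the client side is to time a TCP connect to each regional endpoint, as in the sketch below. The endpoint hostnames are hypothetical placeholders for actual regional deployments.

```python
import socket
import time

# Sample the link latency to each regional deployment by timing a TCP
# connect. The endpoint hostnames below are hypothetical placeholders.

ENDPOINTS = ["myapp-us-east.example.com", "myapp-eu.example.com"]

def connect_time_ms(host, port=443, timeout=3.0):
    start = time.time()
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
    except OSError:
        return None  # unreachable or timed out
    return (time.time() - start) * 1000.0

for host in ENDPOINTS:
    print(host, "->", connect_time_ms(host), "ms")
```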

DNS Latencies: This is another area that needs focus, since DNS lookups can be fairly expensive. It makes sense to speed up DNS lookups by using a DNS service that provides additional name servers across geographical regions. There are many such services that speed up lookups by distributing DNS across geographies; some examples are Amazon’s Route 53 and UltraDNS.
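DNS lookup time itself is easy to sample, for example with a quick timing of socket.getaddrinfo in Python. Bear in mind the first, uncached lookup is the expensive one; later calls are usually served from a local resolver cache.

```python
import socket
import time

# Quick-and-dirty timing of a DNS lookup. The first lookup of a name is the
# expensive one; later calls are usually served from a local resolver cache.

def dns_lookup_ms(hostname):
    start = time.time()
    socket.getaddrinfo(hostname, None)
    return (time.time() - start) * 1000.0

print("example.com resolved in %.1f ms" % dns_lookup_ms("example.com"))
```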

Load Balancer Latencies: In typical cloud deployments, multiple instances usually sit behind a load balancer. Depending on the algorithm the load balancer adopts for balancing the incoming traffic, it will contribute its own share to the latency. Amazon’s Elastic Load Balancer, for instance, is itself a DNS name that resolves to a set of participating IP addresses.
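This is easy to observe: resolving an ELB’s DNS name typically returns several addresses. The sketch below uses example.com as a stand-in for an actual ELB hostname.

```python
import socket

# An ELB's DNS name typically resolves to several IP addresses, and clients
# are spread across them. 'example.com' stands in for an actual ELB hostname
# such as my-loadbalancer-1234567890.us-east-1.elb.amazonaws.com.

infos = socket.getaddrinfo("example.com", 80, type=socket.SOCK_STREAM)
addresses = {info[4][0] for info in infos}
print(addresses)  # the set of participating IPs behind the single name
```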

Application Latencies: When the load balancer sends a request to the web application, the logic that processes the request is a key contributor. This latency is within the developer’s control, so it makes sense to bring it down to the absolute minimum.

Web Page Rendering Latencies: A poorly designed web page can also result in large latencies. A page that needs to download a lot of items before it can render will definitely affect the user’s experience, so it is necessary to design an efficient page that renders quickly. A standard technique is to use a Content Delivery Network (CDN) to deliver content. CDNs distribute content across multiple servers dispersed geographically, and the content server selected for delivery is the one closest to the user, based on the fewest number of hops. Major players among CDNs are Akamai, Edgecast and Amazon’s CloudFront.

These are the many aspects that contribute to overall latency. The focus should be on trying to optimize all of these areas when deploying a web application, whether in a hosted network or the public cloud.
