Bend it like Bluemix, MongoDB with auto-scaling – Part 3

In this last post of the series, I test the performance of the Bluemix cloud app with MongoDB against concurrent queries and deletes, with auto-scaling on. Before I started this series of tests I moved the Overload policy a couple of notches higher, making it scale out if memory utilization exceeds 75% for 120 secs and scale in if it falls below 30% for 120 secs (up from the earlier 55% memory utilization), as shown below.

[Screenshot: Auto-scaling Overload policy – scale out above 75% memory utilization, scale in below 30%]

The code for the bluemixMongo app can be forked from DevOps at bluemixMongo or cloned from GitHub at bluemix-mongo-autoscale. The multi-mechanize scripts can be downloaded from GitHub at multi-mechanize.

Before starting the tests I checked the current number of documents inserted by the concurrent inserts (see Bend it like Bluemix, MongoDB using Auto-scaling – Part 2). The total number, as determined by checking the logs, was 1380. Sure enough, with the scaling policy change, after 2 minutes the number of instances dropped from 3 to 2.

[Screenshot: Number of instances dropping from 3 to 2]

1. Querying the bluemixMongo app with Multi-mechanize

The Multi-mechanize Python script used for querying the bluemixMongo app simply invokes the app's userlist URL with resp = br.open("http://bluemixmongo.mybluemix.net/userlist/").

v_user.py

import random
import time
import mechanize

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        # create a Browser instance
        br = mechanize.Browser()
        # don't bother with robots.txt
        br.set_handle_robots(False)
        # start the timer
        start_timer = time.time()
        # Display 5 random documents from the userlist page
        resp = br.open("http://bluemixmongo.mybluemix.net/userlist/")
        assert("Example Mongo Page" in resp.get_data())
        # stop the timer and store the custom timer
        latency = time.time() - start_timer
        self.custom_timers["Userlist"] = latency
        # think-time of 1-2 seconds between requests
        r = random.uniform(1, 2)
        time.sleep(r)
        self.custom_timers['Example_Timer'] = r

The configuration setup for this script creates 2 sets of 10 concurrent threads

config.cfg
[global]
run_time = 300
rampup = 0
results_ts_interval = 10
progress_bar = on
console_logging = off
xml_report = off
[user_group-1]
threads = 10
script = v_user.py
[user_group-2]
threads = 10
script = v_user.py

The corresponding userlist.js for querying the app is shown below. Here the query is constructed by creating a regular expression for a random FirstName, consisting of a random letter and a random number. The query is also limited to 5 documents.

function(callback)
{
    // Display a random set of 5 records based on a regular expression
    // built from a random letter and a random number
    var randnum = Math.floor((Math.random() * 10) + 1);
    var alpha = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','X','Y','Z'];
    var randletter = alpha[Math.floor(Math.random() * alpha.length)];
    var val = randletter + ".*" + randnum + ".*";
    // Limit the display to 5 documents
    collection.find({"FirstName": new RegExp(val)}).limit(5).toArray(function(err, items) {
        if (err) {
            console.log(err + " Error getting items for display");
        }
        else {
            res.render('userlist', { "userlist" : items }); // end res.render
        } // end else
        db.close(); // Ensure that the open connection is closed
    }); // end toArray function
    callback(null, 'two');
}

2. Running the userlist query

The following screenshot shows the userlist query being executed concurrently with Multi-mechanize. Note that the number of instances also drops down to 1.

[Screenshot: Concurrent userlist queries with the number of instances dropping to 1]

3. Deleting documents with Multi-mechanize

The multi-mechanize script for deleting a document is shown below. This script calls the URL with resp = br.open("http://bluemixmongo.mybluemix.net/remuser"). No values need to be entered into the form and the 'submit' is simulated.

v_user.py
import time
import mechanize

class Transaction(object):
    def __init__(self):
        self.custom_timers = {}

    def run(self):
        # create a Browser instance
        br = mechanize.Browser()
        # don't bother with robots.txt
        br.set_handle_robots(False)
        br.addheaders = [("User-agent", "Mozilla/5.0Compatible")]
        # start the timer
        start_timer = time.time()
        # submit the request
        resp = br.open("http://bluemixmongo.mybluemix.net/remuser")
        #resp = br.open("http://localhost:3000/remuser")
        resp.read()
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Load_Front_Page"] = latency
        # think-time
        time.sleep(2)
        # select first (zero-based) form on page
        br.select_form(nr=0)
        # set form fields (no values are needed for the delete)
        br.form["firstname"] = ""
        br.form["lastname"] = ""
        br.form["mobile"] = ""
        # start the timer
        start_timer = time.time()
        # submit the form
        resp = br.submit()
        resp.read()
        print("Removed")
        # stop the timer
        latency = time.time() - start_timer
        # store the custom timer
        self.custom_timers["Delete"] = latency
        # think-time
        time.sleep(2)

config.cfg

The config file is set to start 2 sets of 10 concurrent threads and execute for 300 secs

[global]
run_time = 300
rampup = 0
results_ts_interval = 10
progress_bar = on
console_logging = off
xml_report = off
[user_group-1]
threads = 10
script = v_user.py
[user_group-2]
threads = 10
script = v_user.py

deleteuser.js

This Node.js snippet does a findOne() to fetch a document and then removes it with 'justOne' set to true.

collection.findOne(function(err, item) {
    // Delete just a single record
    collection.remove(item, {justOne: true}, function(err, doc) {
        if (err) {
            // If it failed, return error
            res.send("There was a problem removing the information to the database.");
        }
        else {
            // If it worked redirect to userlist
            res.location("userlist");
            // And forward to success page
            res.redirect("userlist");
        }
    });
});
collection.find().toArray(function(err, items) {
    console.log("Length =----------------" + items.length);
    db.close();
});
callback(null, 'two');

4. Running the deleteuser multi-mechanize script

The output of the script executing, and the reduction in the number of instances because of the change in the memory utilization policy, is shown below.

[Screenshot: deleteuser script execution and the reduced number of instances]

5. Multi-mechanize

As mentioned in the previous posts, the multi-mechanize commands are executed as follows.

To create a new project:

multimech-newproject.exe userlist

This will create 2 folders, a) results and b) test_scripts, and the file c) config.cfg. The v_user.py needs to be updated as required.

To run the script
multimech-run.exe userlist

The details of the response times for the query are shown below.

[Chart: All_Transactions_response_times_intervals]

 

More details on latency and throughput for the queries and the deletes are included in the results folder of multi-mechanize    

6. Autoscaling

The details of the auto-scaling service are shown below.

a. Scaling Metric Statistics

[Screenshot: Scaling metric statistics]

b. Scaling history

[Screenshot: Scaling history]

7. Monitoring and Analytics (M & A)

The output from M & A is shown below.

 

a. Performance Monitoring

[Screenshot: Performance monitoring]

b. Log Analysis output

The log analysis gives a detailed report on the calls made to the app, the console log output and other useful information.

[Screenshot: Log analysis output]

This series of 3 posts, Bend it like Bluemix, MongoDB with auto-scaling, demonstrated the ability of the cloud to expand and shrink based on the load. An important requirement for Cloud Architects is to design applications that can scale horizontally without impacting performance while keeping costs optimal. The real challenge with auto-scaling is the need to make the application truly distributed, as opposed to the monolithic architectures we are generally used to. I hope to write another post on creating a simple distributed application later.

Hasta la Vista!

Also see
1.  Bend it like Bluemix, MongoDB with autoscaling – Part 1
2. Bend it like Bluemix, MongoDB with autoscaling – Part 2

You may like :
a) Latency, throughput implications for the cloud
b) The many faces of latency
c) Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
d) A Bluemix recipe with MongoDB and Node.js
e) Spicing up IBM Bluemix with MongoDB and NodeExpress
f) Rock N’ Roll with Bluemix, Cloudant & NodeExpress

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Bend it like Bluemix, MongoDB using Auto-scale – Part 1!

In the next series of posts I turn on the heat on my cloud deployment in IBM Bluemix and check out the elastic nature of this PaaS offering. Handling traffic load and elastically expanding and contracting is what the cloud does best. This is where the 'rubber really meets the road'. In this series of posts I generate the traffic load using Multi-Mechanize, a performance test framework created by Corey Goldberg.

This post is based on an earlier cloud app that I created on Bluemix, namely Spicing up a IBM Bluemix Cloud app with MongoDB and NodeExpress. I had to make changes to this code to iron out issues while handling concurrent inserts, displays and deletes issued from the multi-mechanize tool, and also to manage the asynchronous nightmare of Node.js.

The code for this Bluemix, MongoDB with Auto-scaling app can be forked from DevOps at bluemixMongo. The code can also be cloned from GitHub at bluemix-mongo-autoscale.

1. To get started, fork the code from DevOps at bluemixMongo. Then change the host name in manifest.yml to something unique and click the 'Build and Deploy' button at the top right of the page.

[Screenshot: Build and Deploy from DevOps]

1a. Alternatively, the code can be cloned from GitHub at bluemix-mongo-autoscale. From the directory where the code has been cloned, push the code using Cloud Foundry's cf command as follows:

cf login -a https://api.ng.bluemix.net

cf push bluemixMongo -p . -m 128M

2. Now add the MongoDB service and click ‘OK’ to restage the server.

[Screenshot: Adding the MongoDB service]

3. Add the Monitoring and Analytics (M & A) and the Auto-scaling services. M & A gives a good report on Availability and Performance, and also provides Log Analysis. The Auto-scaling service is the service that allows the app to expand elastically to changing traffic loads.

[Screenshot: Adding the Monitoring & Analytics and Auto-scaling services]

4. You should see the bluemixMongo app running with 3 services: MongoDB, Auto-scaling and M & A.

[Screenshot: The bluemixMongo app with the 3 services]

5. You should now be able to click bluemixMongo.mybluemix.net and check out the application.

6. Now configure the Overload (auto-scaling) policy. This is a slightly contrived example: the scaling policy is set to scale up if the memory utilization exceeds 55%. (Typically the scale-up would be configured for > 80% memory usage.)

[Screenshot: Configuring the Overload policy]

7. Now check the configured Auto-scaling policy

[Screenshot: The configured Auto-scaling policy]

8. Change the Memory Quota as appropriate. In my case I have kept the memory quota at 128 MB. Note that the available memory is 640 MB and hence allows up to 5 instances. (By the way, it is also possible to set any other value, like 100 MB.)

[Screenshot: Memory quota set to 128 MB]

9. Click the Monitoring and Analytics service and take a look at the output in the different tabs

[Screenshot: Monitoring and Analytics output]

10. Next you need to set up the performance test tool, Multi-mechanize. Multi-mechanize creates concurrent threads to generate load on a web site or service. It is based on Python, which makes it easy to modify the scripts for hitting a website, making a REST call or submitting a form.

To set up Multi-mechanize you also need additional packages like numpy and matplotlib, as the tool generates traffic based on a user-provided script and measures latency and throughput, besides generating graphs for these.

For detailed steps on setting up Multi-mechanize, please follow the steps in Trying out multi-mechanize web performance and load testing framework. Note: I would suggest that you install Python 2.7.2 and not the later 3.x version, as some of the packages are based on the 2.7 version, which has a slightly different syntax for certain Python statements.

In the next post I will run a traffic test on the bluemixMongo application using Multi-mechanize and observe how the cloud app responds to the load.

Watch this space!
Also see
Bend it like Bluemix, MongoDB with autoscaling – Part 2!
Bend it like Bluemix, MongoDB with autoscaling – Part 3

You may like :
a) Latency, throughput implications for the cloud
b) The many faces of latency
c) Brewing a potion with Bluemix, PostgreSQL & Node.js in the cloud
d) A Bluemix recipe with MongoDB and Node.js
e) Spicing up IBM Bluemix with MongoDB and NodeExpress
f) Rock N’ Roll with Bluemix, Cloudant & NodeExpress

Disclaimer: This article represents the author’s viewpoint only and doesn’t necessarily represent IBM’s positions, strategies or opinions

Towards an auction-based Internet

The post below was quoted and discussed extensively in (see the link) GigaOM, 14 Jan 2011 – Software Defined Networks could create an auction-based bazaar.

Published in Telecom Asia, Jan 13, 2012 – Towards an auction-based internet

Are we headed to an auction-based Internet? This train of thought (no pun intended), which struck me while I was travelling from Chennai to Bangalore last evening, was the result of the synthesis of different ideas and technologies which I had read about in the recent past.

The current state of technology and the technology trends do seem to indicate such a possibility.  An auction-based internet would be a business model in which bandwidth would be allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an eventuality is a distinct possibility considering the economics and latencies involved in data transfer, the evolution of the smart grid concept and the emergence of the promising technology known as the OpenFlow protocol.  This is further elaborated below

Firstly, in the book "Grids, Clouds and Virtualization" by Massimo Cafaro and Giovanni Aloisio, the authors highlight a typical problem of the computing infrastructure of today. In the book, the authors contend that a key issue in large-scale computing is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. They quote Jim Gray (Turing Award, 1998), whose paper on "Distributed Computing Economics" states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. This is in fact used in the Hadoop paradigm, where the principle of locality is maintained by keeping the programs close to the data on which they operate.

The book highlights another interesting fact. It says the "cheapest and fastest way to move a Terabyte cross country is sneakernet" (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another). Google used sneakernet to transfer 120 TB of data. The SETI@home project also used sneakernet to transfer data recorded by their telescopes in Arecibo, Puerto Rico, stored on magnetic tapes, to Berkeley, California.

It is now a well-known fact that mobile and fixed-line data has virtually exploded, clogging the internet. YouTube, video downloads and other streaming data choke the data pipes of the internet, and Service Providers have not found a good way to monetize this data explosion. While there has been a tremendous advancement in CPU processing power (CPU horsepower in the range of petaflops) and enormous increases in storage capacity (of the order of petabytes), coupled with dropping prices, there has been no corresponding drop in bandwidth prices in relation to the bandwidth capacity.

Secondly, in the book "Hot, Flat and Crowded", Thomas L. Friedman describes the "Smart Homes" of the future, in which all the home appliances will have sensors and will participate in the energy auction in real time as a part of the Smart Grid. The price of energy in the Energy Grid fluctuates like stock prices, since enterprises are bidding for energy during the day. In his Smart Home, Friedman envisions a situation in which the washing machine will turn on during off-peak hours, when the price of energy in the energy grid is low. In this way all the appliances in the homes of the future will minimize energy consumption by adjusting their cycles accordingly.

Why could not the internet also behave in a similar fashion? The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. Why can we not have an intelligent network in place in which the price of different data transfer rates varies depending on the time of day, the type of traffic and the quality of service required? Could the internet be based on an auction mechanism in which different devices bid for bandwidth based on the urgency, speed and quality of service required? Is this possible with the routers and switches of today?

The answer is yes. This can be achieved by a new, path-breaking innovation known as Software Defined Networks (SDNs), based on the OpenFlow protocol. SDN is the result of pioneering effort by Stanford University and the University of California, Berkeley, and represents a paradigm shift in the way networking elements operate. Do read my post Software Defined Networks: A glimpse of tomorrow for a more detailed look at SDNs. SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, it is necessary for the logic in the Flow Controller to be updated dynamically based on the bid price.

For example, we could assume that a corporation has 3 different flows, namely immediate, as-soon-as-possible (ASAP), and 'price below $x'. Based on the upper ceiling for the bid price, the OpenFlow controller will allocate a flow for the immediate traffic of the corporation. For the ASAP flow, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b; the OpenFlow Controller will ensure that it can arrange for such a flow. The last type of traffic, which is not urgent, will be sent during non-peak hours. This requires that the OpenFlow controller be able to allocate different flows dynamically based on winning the auction process in this scheme. The current protocols of the internet of today, namely RSVP and DiffServ, allocate pipes based on the traffic type and class, which is static once allocated. This strategy enables OpenFlow to dynamically adjust traffic flows based on the current bid price prevailing in that part of the network.
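The allocation logic described above can be sketched in a few lines of Python. This is purely illustrative: the function, traffic classes and prices below are hypothetical and are not part of the OpenFlow protocol or of any controller API.

# Hypothetical sketch of auction-based flow allocation. The class names and
# prices are illustrative only and not part of OpenFlow itself.
def allocate_flow(traffic_class, current_price, ceiling=None, band=(None, None)):
    if traffic_class == "immediate":
        # Grant the flow as long as the current bid price is within the ceiling
        return current_price <= ceiling
    elif traffic_class == "asap":
        # Grant the flow only when the price falls within the requested range $a - $b
        low, high = band
        return low <= current_price <= high
    else:
        # Best-effort traffic is simply deferred to non-peak hours
        return False

# Example: at a current price of $4, an 'immediate' flow with a $5 ceiling is granted,
# while an 'asap' flow waiting for the $1 - $3 band is not
print(allocate_flow("immediate", current_price=4.0, ceiling=5.0))   # True
print(allocate_flow("asap", current_price=4.0, band=(1.0, 3.0)))    # False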

The ability of the OpenFlow protocol to dynamically allocate different flows will once and for all solve the problem of monetizing mobile and fixed-line data. Users can decide the type of service they are interested in and choose appropriately. This will be a win-win for both the Service Provider and the consumer. The Service Provider will be able to get an ROI on the infrastructure based on the traffic flowing through his network. The consumer, rather than paying a fixed access charge, could pay a smaller charge because of low bandwidth usage.

An auction-based internet is not just a possibility but would also be a worthwhile business model to pursue. The ability to route traffic dynamically based on an auction mechanism enables the internet infrastructure to be utilized optimally. It will serve the dual purpose of solving traffic congestion, as the highest bidders will get the pipe, while also monetizing data traffic based on its importance to the end user.

An auction based internet is a very distinct possibility in our future given the promise of the OpenFlow protocol.

All  thoughts, ideas or counter opinions are welcome!


Profiting from a cloud deployment

Cloud computing does offer enterprises and organizations a mixed bag of goodies. For one, it provides utility-style computing, the ability to grow and shrink with changing loads, zero upfront costs etc. The benefits of cloud computing are many, but does it all add up to profit for an enterprise? That is the critical question that needs to be answered.

This post will take a look at what it takes for a cloud deployment to be profitable for an organization.

The critical parameters for any web application are latency and throughput. A well-designed web application, whether it is an e-retail site or an ad-serving application, will try to minimize the latency or response time while at the same time maximizing the throughput of the application. For any application, while the latency can be kept within specified limits, the throughput will tend to plateau at a certain level and will not increase with increasing traffic. Utilizing a larger instance can raise the throughput plateau slightly. In any case, the reality is that throughput tends to flatten as the traffic is increased.

A typical cloud application will be made of several compute instances, database instances, DNS services etc. Cloud usage is billed by the hour. Hence we can represent the cost of a cloud deployment as follows

Cost (cloud deployment) = m * compute instance + n * database instance + o * network bytes + P

Where P = cost of DNS + Elastic IPs + other costs.

This can be represented by the formula

C = a * D * t

where C = cost of cloud deployment

D = costs per hour of the deployment

and ‘a’ is some arbitrary constant and ‘t’ is the time

Let us assume that for the cloud deployment we get a throughput of T.

The revenue for a web application, whether it is an e-commerce site, an e-ticketing site or an ad-serving engine, will depend on the throughput, i.e. the larger the throughput, the larger the revenue and hence the profit. We can then say that 'R', the revenue, is

R (revenue) = k * T * t

In other words, the revenue is proportional to the throughput.

Hence, to determine the profitability of a particular cloud deployment, we need to compare the cost of the deployment for a given throughput against the projected revenue from that throughput. As long as the cost of the deployment is less than the revenue arising from the throughput, the deployment will be profitable. This can be represented pictorially as below.

The graph clearly shows that for a profitable deployment

d/dt (k * T * t) > d/dt (a * D * t), or

k * T > a * D

Hence, as can be seen from the picture, as long as the slope of the cumulative deployment cost is less than the slope of the revenue, the deployment will be profitable.
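As a concrete (and entirely made-up) illustration, the break-even condition k * T > a * D can be checked in a few lines of Python; the values of a, D, k and T below are arbitrary and do not come from any real deployment.

# Illustrative figures only, for the condition k * T > a * D
a, D = 1.0, 0.50        # constant 'a' and deployment cost per hour ($)
k, T = 0.0004, 2000     # revenue per transaction ($) and throughput (transactions/hour)

cost_slope = a * D      # d/dt of the cumulative deployment cost
revenue_slope = k * T   # d/dt of the cumulative revenue

if revenue_slope > cost_slope:
    print("Profitable: revenue grows at $%.2f/hour vs cost at $%.2f/hour"
          % (revenue_slope, cost_slope))
else:
    print("Not profitable at this throughput and cost")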


The Many Faces of Latency

Nothing is more damaging to a website than poor response times. Latency is probably the most serious issue that website application developers have to contend with. Whether it is a retail application or an e-ticketing application, poor response times play havoc with the user experience. Latency has many faces, each contributing in a little way to the overall response time of the application. This article looks at some of the key culprits that contribute to website latency.

Link Latencies: This is one of the major contributors. The link speed from the host computer to the website plays a major role. For those applications that are hosted on the public cloud it makes sense to deploy in multiple availability zones dispersed geographically. This will ensure that people across the globe get to the website from a cloud deployment closest to them. Besides, with the recent Amazon EC2 outage it definitely makes sense to be able to deploy across availability zones, promoting geographical resiliency in the application. Dispersing the applications geographically helps in connecting the user with the least number of intervening hops, thus reducing the response times.

DNS Latencies: This is another area which needs to be focused on. DNS lookups can be fairly expensive. Hence it makes sense to speed up DNS lookups by using DNS services that provide additional name servers across geographical regions. There are many such DNS services that speed up DNS lookups by serving them from name servers spread across geographies. Some examples are Amazon's Route 53, UltraDNS etc.

Load Balancer Latencies: Typical cloud deployments will have multiple instances, usually behind a load balancer. Depending on what algorithm the load balancer adopts for balancing the incoming traffic, it is definitely going to contribute to the latency. Amazon's Elastic Load Balancer, for example, is usually a set of participating IPs.

Application Latencies: When the load balancer sends the request to the web application, the logic involved in processing the request is a key contributor. This latency is within the control of the developer, so it makes sense to bring it down to the absolute minimum.

Web Page Rendering Latencies: A poorly designed web page can also result in large latencies. A web page that needs to download a lot of items before it can be rendered will definitely affect the user's experience. Hence it is necessary to design an efficient web page that renders quickly. A standard technique for delivering content is to use a Content Delivery Network (CDN). CDNs typically distribute content across multiple servers dispersed geographically. The content server selected for content delivery is based on user proximity, i.e. the fewest number of hops. Major players in CDNs are Akamai, EdgeCast and Amazon's CloudFront.

These are the many aspects that contribute to overall latencies. The focus should be on trying to optimize all of these areas when deploying a web application, either in a hosted network or on the public cloud.
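As a rough way of seeing two of these faces for a given URL, the short Python sketch below times the DNS lookup separately from the full HTTP request. It is a simple approximation, and the URL used is just the example app from the earlier posts.

# Separate the DNS lookup time from the total request latency (rough approximation)
import socket
import time
try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

def measure(url, host):
    # Time the DNS lookup on its own
    t0 = time.time()
    socket.getaddrinfo(host, 80)
    dns_latency = time.time() - t0

    # Time the full GET: DNS + connection setup + server processing + transfer
    t0 = time.time()
    urlopen(url).read()
    total_latency = time.time() - t0

    print("DNS lookup  : %.3f s" % dns_latency)
    print("Full request: %.3f s" % total_latency)

measure("http://bluemixmongo.mybluemix.net/userlist/", "bluemixmongo.mybluemix.net")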


Latency, throughput implications for the Cloud

The key considerations for any website are latency and throughput. These two parameters are extremely important to web designers as the response time of the web site and the ability to handle large amounts of traffic are directly related to the user experience and the loyalty of returning users.

What are these two parameters and why are they significant? Before looking at latency we need to understand what the response time of a web application is. Ideally, this could be defined as the time between the receipt of the HTTP request and the emitting of the corresponding response. Unfortunately, any web site hosted on the World Wide Web adds a lot more delay than the response time alone. This extra delay is the latency of the web site and is primarily due to the propagation and transmission delays on the internet. There are many contributors to this latency, starting from the DNS lookup to the link bandwidth.

Throughput on the other hand represents the maximum simultaneous queries or transactions per second that the web application is capable of handling. This is usually measured as transactions-per-second (tps) or queries-per-second (qps).

A good way to understand response time and throughput is to use an oft-used example of a retail store handling customers. Assuming that there are 5 counter clerks who take 1 minute to check out a customer, we can readily see that as the number of customers in the store increases, the throughput increases from 1 customer/minute to a maximum of 5 customers/minute. Since a cashier is able to process a customer in 1 minute, the response time is 1 minute/customer. Assuming a 6th customer enters and needs to check out, he/she will have to wait, for example 1 minute, if the 5 counter clerks are busy processing 5 other clients. Hence the response time for that customer will be 1 minute (waiting) + 1 minute (servicing) = 2 minutes; the response time increases from 1 minute to 2 minutes. If further clients are ready to check out, the length of the wait in the queue will increase and hence the response time. Clearly the throughput cannot increase beyond 5 customers/minute, while the response time will increase non-linearly as the clients enter the store faster than they can be checked out by the counter clerks.
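The arithmetic of the store analogy can be captured in a small Python sketch. The 5 clerks and 1-minute checkout time are taken from the example above; the arrival rates are illustrative. It shows throughput flattening at 5 customers/minute while the response time of the last customer to arrive keeps growing.

import math

clerks, service_time = 5, 1.0    # 5 counter clerks, 1 minute per checkout

def store_metrics(arrivals_per_minute):
    # Throughput can never exceed the number of clerks
    throughput = min(arrivals_per_minute, clerks)
    # The last customer to arrive waits one service cycle for every full batch
    # of 5 customers ahead of him/her
    cycles = math.ceil(arrivals_per_minute / float(clerks))
    response_time = cycles * service_time
    return throughput, response_time

for arrivals in (2, 5, 6, 10, 20):
    tput, resp = store_metrics(arrivals)
    print("arrivals/min=%2d  throughput=%d/min  response time=%.0f min"
          % (arrivals, tput, resp))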

This is precisely the behavior of web applications. When the traffic to a web site is increased, the throughput increases linearly and finally reaches a throughput "plateau". After this point, as the load is increased further, the throughput remains saturated at this level. The response time, on the other hand, is low at low traffic but starts to increase non-linearly with increasing load, and continues to increase as the application maxes out system resources like the CPU and memory.

When deploying applications on the cloud, latency and throughput are key considerations that determine the kind of computing resources needed in the cloud. Assuming the web application has been optimized and performance-tuned, what needs to be done is to load test the application on the cloud using different CPU instances. For example, assume that the application is load tested on a small CPU instance. We need to get the response time and throughput plots with increasing load. Similarly, we now need to deploy the web application on a medium instance and plot the response times and the throughput plateau on the medium instance.

Now the choice between a small CPU instance and a medium CPU instance can be made as follows. Assuming that the requirement of the web application is to have a response time of 't' seconds, we determine the corresponding traffic handling capacity: for the small CPU instance, say 'c', and for the medium CPU instance, say 'C'. If the web site has to handle a total traffic of T, then we determine the number of instances needed in each case. For the

small CPU instance it will be n = (T/c) + 1

and for

the medium CPU instance it will be N = (T/C) + 1.

Now we compute the relative costs of the small and medium CPU instances and identify which is more economical. For example if r1 is the cost per hour of the small CPU instance and R1 is the cost of the medium CPU instance we choose

The small CPU instance if r1 *n < R1 *N (per hour)

While on the other hand if R1 *N < r1 *n then we will choose the medium instance.
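A minimal Python sketch of this sizing and cost comparison is shown below; the traffic T, the per-instance capacities c and C, and the hourly rates r1 and R1 are all made-up figures, not results from an actual load test.

# Illustrative figures only
T = 2500             # total traffic to be handled (requests/sec)
c, C = 300, 800      # throughput plateau of a small / medium instance (requests/sec)
r1, R1 = 0.06, 0.12  # hourly cost of a small / medium instance ($)

n = int(T // c) + 1  # small instances needed:  n = (T/c) + 1
N = int(T // C) + 1  # medium instances needed: N = (T/C) + 1

small_cost, medium_cost = r1 * n, R1 * N
print("Small : %d instances at $%.2f/hour" % (n, small_cost))
print("Medium: %d instances at $%.2f/hour" % (N, medium_cost))
print("Choose the %s instance type"
      % ("small" if small_cost < medium_cost else "medium"))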

Hence the determination of which CPU instance to use, and the configuration of the web application on the cloud, will depend on appropriate performance tuning and proper load testing on the cloud. Do also read my other posts on latency, namely "The Many Faces of Latency" and "The Anatomy of Latency".

Also see latency and throughput in action in the following series of posts

– Bend it like Bluemix, MongoDB with autoscaling – Part 1

– Bend it like Bluemix, MongoDB with autoscaling – Part 2

– Bend it like Bluemix, MongoDB with autoscaling – Part 3


The Anatomy of Latency

Latency is a measure of the time delay experienced in a system. In data communications, latency would be measured as the round-trip delay between sending a packet and receiving a response from the destination. In the world of web applications, latency is the response time of a web site. In web applications, latency is dependent on both the round-trip time on the communication link and the processing time of the application. Hence we could say that

latency = 2 * round trip time  + Processing time

The round-trip time is probably less susceptible to increasing traffic than the processing time taken for handling the increased load. The processing time of the application is particularly pernicious in that it is susceptible to changing traffic. This article tries to analyze why the latency or response times of web applications typically increase with increasing traffic. While the latency increases exponentially as the traffic increases, the throughput increases to a point and then finally starts to drop substantially. The ideal situation for all internet applications is to have the ability to scale horizontally, allowing the application to handle increasing traffic by simply adding more commodity servers while keeping response times within acceptable limits. However, in the real world this never happens.

The price of Latency

Latency hurts business. Amazon found out that every 100 ms of latency cost them 1% of sales. Similarly, Google realized that a 0.5 second increase in the time to generate search results dropped search traffic by 20%. Latency really matters. Reactions to bad response times in web sites range from minor annoyance to complete frustration and loss of users and business.

The cause of processing latency

One of the fundamental requirements of scalable systems is that they should be loosely coupled. The application needs to have a modular architecture with well-defined interfaces to the other modules. Ideally, applications which have been designed with fairly efficient processing times of the order of O(log n) or O(n log n) will be immune to changing loads but will be impacted by changes in the number of data elements. So the algorithms adopted by the applications themselves do not contribute to the increasing response times under increased traffic. So what really is the performance bottleneck behind increasing latencies and decreasing throughput at increased loads?

Contention- the culprit

One of the culprits behind the deteriorating response is thread locking and resource contention. Assuming that the application has been designed with Reader-Writer locks or a message-queue-based synchronization mechanism, the time spent waiting for resources to become free as traffic increases will result in degraded performance.

Let us assume that the application is read-heavy and write-light and has implemented a Reader-Writer synchronization mechanism. Further, let us assume that a writer thread locks a resource for 250 ms. At low loads we could have 4 such threads, each locking the resource for 250 ms, for a total span of 1 s. Hence in 1 s there can be a maximum of 4 threads, each of which has held a write lock for 250 ms, for a total of 1 s. In this interval all reader threads will be forced to wait. When the traffic load is low, the number of reader threads waiting for the lock to be released will be small and will not have much impact, but as the traffic increases, the number of threads that are waiting for the lock to be released will increase. Since a write lock takes a finite amount of time to complete processing, we cannot go beyond 4 write threads in 1 second at the given CPU speed.

However, as the traffic increases further, the waiting threads not only increase in number but also consume CPU and memory. This adversely impacts the writer threads, which find that they have fewer CPU cycles and less memory, and hence take longer to complete. This downward cycle worsens, resulting in an increase in the response time and a worsening throughput in the application.
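The effect of such waiting can be sketched with a few lines of Python threading code. This is a simplification (it uses a plain exclusive lock rather than a true Reader-Writer lock, and the 250 ms figure is taken from the example above), but it shows reader threads queuing up behind a writer and the wait growing as the number of threads increases.

# Simplified sketch: a writer holds an exclusive lock for 250 ms while readers queue up
import threading
import time

lock = threading.Lock()
wait_times = []

def writer():
    with lock:
        time.sleep(0.25)            # hold the resource for 250 ms

def reader():
    t0 = time.time()
    with lock:                      # readers block until the lock is released
        wait_times.append(time.time() - t0)

def run(num_readers):
    del wait_times[:]
    w = threading.Thread(target=writer)
    readers = [threading.Thread(target=reader) for _ in range(num_readers)]
    w.start()
    time.sleep(0.01)                # let the writer grab the lock first
    for t in readers:
        t.start()
    for t in readers:
        t.join()
    w.join()
    print("%3d readers: average wait %.3f s, max wait %.3f s"
          % (num_readers, sum(wait_times) / len(wait_times), max(wait_times)))

run(5)      # light load: readers wait mostly for the single writer
run(100)    # heavier load: many more threads are tied up waiting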

The solution to this problem is not easy. We need to revisit the areas where the application blocks waiting for something. Locking, besides causing threads to wait, also adds the overhead of getting scheduled prior to being able to execute again. We need to minimize the time a thread holds a resource before allowing other threads access to it.


Designing for Cloud Worthiness

Cloud Computing is changing the rules of computing for the enterprise. Enterprises are no longer constrained by the capital costs of upfront equipment purchases. Rather, they can concentrate on the application, deploy it on the cloud and pay in a utility style based on usage. Cloud computing essentially presents a virtualized platform on which applications can be deployed.

The Cloud exhibits the property of elasticity by automatically adding more resources to the application as demand grows and shrinking the resources when the demand drops. It is this property of elasticity of the cloud and the ability to pay based on actual usage that makes Cloud Computing so alluring.

However, to take full advantage of the Cloud, the application must use the available cloud resources judiciously. It is important for applications that are to be deployed on the cloud to have the property of scaling horizontally. What this implies is that the application should be able to handle more transactions per second when more resources are added to it. For example, if the application has been designed to run in a small CPU instance of 1.7 GHz, 32-bit and 160 GB of instance storage with a throughput of 800 transactions per second, then one should be able to add 4 such instances and scale to handling 4000 transactions per second.

However, there is a catch in this. How does one determine what the theoretical limit of transactions per second should be for a single instance? Ideally we should maximize the throughput and minimize the latency for each instance prior to going to the next step of adding more instances on the cloud. One should squeeze the maximum performance from the application in the instance of choice prior to using multiple instances on the cloud. Typical applications perform reasonably well under small loads, but as the traffic is increased the response time increases and the throughput starts dipping.

There is a need to run profiling tools and remove bottlenecks in the application. The standard refrain for applications to be deployed on the cloud is that they should be loosely coupled and also stateless. However, most applications tend to be multi-threaded, with resource sharing in various modules. The performance impact of locks and semaphores should be given due consideration. Typically a lot of time is wasted in the wait state of threads in the application. A suitable technique should be used for providing concurrency among threads. The application should be analyzed as to whether it is read-heavy and write-light, or write-heavy and read-light. Suitable synchronization techniques like Reader-Writer locks, message-queue-based exclusion or monitors should be used.

I have found callgrind for profiling and gathering performance characteristics, along with KCachegrind for providing a graphical display of performance times, extremely useful.

Another important technique to improve performance is to maintain an in-memory cache of frequently accessed data. Rather than making frequent queries to the database, periodic updates from the database should be made and stored in an in-memory cache. While this technique works fine with a single instance, the question of how to handle in-memory caches for multiple instances in the cloud represents quite a challenge. In the cloud, when there are multiple instances, there is a need for a distributed cache which is shared among them. Memcached is an appropriate technique for maintaining such a distributed cache in the cloud.
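A minimal sketch of such a shared cache with the python-memcached client is shown below. It assumes a memcached server running on 127.0.0.1:11211, and get_users_from_db() is a placeholder for the real database query, not code from the bluemixMongo app.

# Shared cache in front of the database using python-memcached (assumes a
# memcached server at 127.0.0.1:11211)
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_users_from_db():
    # Placeholder for an expensive database query
    return [{"FirstName": "A1", "LastName": "B1", "Mobile": "1111111111"}]

def get_users():
    users = mc.get("userlist")              # all app instances share this cache
    if users is None:
        users = get_users_from_db()         # cache miss: query the database
        mc.set("userlist", users, time=60)  # refresh the cached copy every 60 seconds
    return users

print(get_users())   # first call hits the database, later calls hit the cache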

Once the application has been ironed out for maximum performance the application can be deployed on the cloud and stress tested for peak loads.

Some good tools that can be used for generating loads on the application are loadUI and multi-mechanize. Personally I prefer multi-mechanize as it uses test scripts that are based on Python which can be easily modified for the testing. One can simulate browser functionality to some extent with Python in multi-mechanize which can prove useful.

Hence, while the cloud provides CPUs, memory and database resources on demand, the enterprise needs to design applications such that these resources are used judiciously. Otherwise the enterprise will not be able to reap the benefits of utility computing if it deploys inefficient applications that hog a lot of resources without appropriate revenue-generating performance.

INWARDi Technologies