Dissecting the Cloud – Part 2

This post delves a little more deeply into the cloud. In the last post, Dissecting the Cloud – Part 1, I described the analogy of a person partitioning a large house into self-contained units, much as a hypervisor abstracts the underlying hardware (CPU, storage and NICs) into virtual CPUs, virtual NICs and virtual disks.

Hence the cloud has several instances, each with its own CPU, NIC and storage. In fact, several tenants can reside on the same cloud, each with their own individual CPU, NIC and storage. This is known as multi-tenancy.

However, multi-tenancy creates a unique set of issues, similar to those of a multi-tenanted house. For example, how does one isolate one tenant from another? How does one charge each tenant? Are the tenants secured from the prying eyes of their neighbors? How can the owner ensure that one particular tenant does not consume an inordinate amount of water or electricity at the expense of the other tenants?

These are typical problems in a multi-tenanted cloud. A common and high-profile issue in the cloud is that of the ‘noisy neighbor’. In this situation one of the instances on the cloud hogs the network bandwidth or the storage tier, resulting in a severe bandwidth crunch or storage access problems for the other instances. Here is an interesting article on the noisy neighbor issue, “The Problem with noisy neighbors in the cloud”.

It appears that IBM has patented a solution for the bandwidth crunch caused by noisy neighbors: IBM patents ‘noisy neighbor’ problem with SDN.

In order to ensure that multi-tenancy can be realized in the cloud, it is essential to isolate the virtual CPUs, network and storage of each tenant.

Network isolation: Network isolation is achieved through the use of VPNs (Virtual Private Networks), VLANs (Virtual LANs) and subnetting.

A VPN creates a secure tunnel between a user and the cloud instance when the instance is accessed over the internet; the data in motion is encrypted using IPSec. The vNICs belonging to a client are logically grouped together in a VLAN, and groups of vNICs can be subnetted together to allow broadcast between them. A VLAN effectively isolates its traffic from that of other VLANs. A very good write-up on VLANs and subnetting can be found at “What is the difference between subnetting and VLAN”.
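To make the subnetting idea concrete, here is a minimal Python sketch (the tenant names and address ranges are invented for illustration) that checks whether two vNIC addresses fall within the same tenant subnet, i.e. the same broadcast domain:

```python
from ipaddress import ip_address, ip_network

# Each tenant's vNICs are grouped into their own VLAN/subnet.
# Tenant names, VLAN IDs and address ranges are purely illustrative.
tenant_subnets = {
    "tenant_a": ip_network("10.0.1.0/24"),   # VLAN 101
    "tenant_b": ip_network("10.0.2.0/24"),   # VLAN 102
}

def same_broadcast_domain(src: str, dst: str, subnet) -> bool:
    """True if both addresses fall inside the same subnet (same VLAN)."""
    return ip_address(src) in subnet and ip_address(dst) in subnet

# Hosts within tenant A's subnet can broadcast to each other ...
print(same_broadcast_domain("10.0.1.10", "10.0.1.20", tenant_subnets["tenant_a"]))  # True
# ... but a host in tenant B's subnet is outside that broadcast domain
print(same_broadcast_domain("10.0.1.10", "10.0.2.10", tenant_subnets["tenant_a"]))  # False
```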

Storage isolation: Storage in the cloud can be made up of block storage, SAN or NAS storage. Storage isolation is typically achieved through the hypervisor and through zoning. Zoning is the partitioning of a Fibre Channel fabric into smaller subsets to restrict interference, add security and simplify management. While a SAN makes several devices and/or ports available to a single device, each system connected to the SAN should only be allowed access to a controlled subset of these devices/ports.
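As a rough conceptual sketch (the zone names and WWNs below are invented, and real zoning lives in the SAN fabric switches rather than in application code), zoning can be thought of as an access-control table mapping initiators to the targets they are allowed to see:

```python
# Fibre Channel zoning viewed as an access-control table (illustrative only).
zones = {
    "zone_tenant_a": {
        "initiators": {"10:00:00:00:c9:aa:aa:01"},   # tenant A's host HBA
        "targets":    {"50:06:01:60:bb:bb:00:01"},   # storage ports carved out for tenant A
    },
    "zone_tenant_b": {
        "initiators": {"10:00:00:00:c9:cc:cc:02"},
        "targets":    {"50:06:01:60:dd:dd:00:02"},
    },
}

def can_access(initiator_wwn: str, target_wwn: str) -> bool:
    """An initiator may reach a target only if both sit in a common zone."""
    return any(
        initiator_wwn in z["initiators"] and target_wwn in z["targets"]
        for z in zones.values()
    )

print(can_access("10:00:00:00:c9:aa:aa:01", "50:06:01:60:bb:bb:00:01"))  # True
print(can_access("10:00:00:00:c9:aa:aa:01", "50:06:01:60:dd:dd:00:02"))  # False - different zone
```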

CPU isolation: The hypervisor does create individual instances that are fairly well isolated from one another. However, this is the area receiving more attention than storage or networking isolation, because it raises security concerns and is prone to attack. In fact, I was greatly surprised to hear that there is a technique called a ‘side channel’ attack, in which an intruder, just by observing the time taken for computations and the temperatures generated, can reverse engineer the actual instructions. This is really a scary thought!
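The following toy Python snippet illustrates only the general principle behind timing side channels – that execution time depends on the data being processed – using the classic early-exit string comparison; actual cross-VM side-channel attacks are far more sophisticated than this sketch:

```python
# Toy illustration of a timing side channel. The secret and the delay are
# artificial; the point is only that timing can leak information.
import time

SECRET = "hunt42"

def naive_compare(guess: str, secret: str) -> bool:
    # Returns as soon as a character differs - the early exit leaks timing info
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False
        time.sleep(0.001)  # exaggerate per-character work so the leak is visible
    return True

def time_guess(guess: str) -> float:
    start = time.perf_counter()
    naive_compare(guess, SECRET)
    return time.perf_counter() - start

# A guess with a correct prefix takes measurably longer than a wrong one,
# letting an observer recover the secret one character at a time.
print(f"wrong prefix : {time_guess('zzzz42'):.4f}s")
print(f"right prefix : {time_guess('huntzz'):.4f}s")
```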

This is how multi-tenancy is achieved in clouds. I hope to revisit this topic again in the future.


Dissecting the Cloud – Part 1

“The Cloud brings with it the promise of utility-style computing and the ability to pay according to usage.

Cloud Computing provides elasticity or the ability to grow and shrink based on traffic patterns.

Cloud Computing does away with CAPEX and the need to buy infrastructure upfront, replacing it with an OPEX model, and so on”.

All this is old news and has been repeated many times. But what exactly constitutes cloud computing? What brings about the above features? What are the building blocks of the cloud that enable one to realize them?

This post tries to look deeper into the innards of the Cloud to determine what the cloud really is.

Before we get to this I would like to dwell on an analogy to understand the Cloud better.

Let us assume, Mr. A owns a large building of about 15,000 sq feet and about 100 feet tall. Let us assume that Mr. A wants to rent this building.

Now, assume that the door of this building opens onto a single, large room on the inside!

Mr. X comes to rent this building. If this were the case, poor Mr. X would presumably have to pay through his nose for the entire building, even though his requirement was only for a small room of about 600 x 600 feet. Imagine the waste of space. Moreover, this would also result in an enormous waste of electricity; imagine the lighting needed. An inordinate amount of water would also have to be used if this single, large room needed to be cleaned. The cost for all of this would have to be borne by Mr. X.

This is clearly not a pleasant state of affairs for either Mr. X or for the owner Mr. A of the building.

The solution to this is easy. What Mr. A needs to do is to partition the building into self-contained rooms (600 x 600 sq feet) with all the amenities. Each self-contained unit would need to have its own electricity and water meter.

Now Mr. A can rent rooms to different tenants on a need basis. This is a win-win situation for both Mr. A and Mr. X. The tenants only need to pay for the rooms they occupy and the electricity and water they consume.

This is exactly the principle behind cloud computing and is known as ‘virtualization’.

There are 3 computing components that one must consider: CPU, network and storage. The picture below shows the virtualization of the CPU, RAM, NIC (network card) and disk (storage).

[Figure: Server virtualization – logical view]

The Cloud is essentially made up of anywhere between 100 and 100,000 servers. The servers are akin to the large building. Running a single OS and application(s) on an entire server is a waste of computing, storage and network resources.

Virtualization abstracts the hardware, storage and network through the use of software known as the ‘hypervisor’. On top of the hypervisor several ‘guest OSes’ can run. Applications can then run on these guest OSes.

Hence over the CPU (single, dual or multi-core) of the server, multiple guest OSes can run, each with its own set of applications.

This is similar to partitioning the large CPU resource of the server into smaller units.
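As a quick illustration, here is a minimal sketch, assuming the libvirt Python bindings and a local QEMU/KVM hypervisor, that lists the guest OS instances multiplexed onto one physical server; other hypervisors expose similar inventories through their own APIs:

```python
# A minimal sketch of inspecting guests running on a hypervisor, assuming the
# libvirt Python bindings and a local QEMU/KVM host ('qemu:///system').
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, max_mem_kib, _, vcpus, _ = dom.info()
        print(f"guest={dom.name()} vCPUs={vcpus} maxMem={max_mem_kib // 1024} MiB")
finally:
    conn.close()
```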

There are 3 main virtualization technologies, namely VMware, Citrix and MS Hyper-V.

Here is a diagram showing the 3 main virtualization technologies.

[Figure: The 3 main server virtualization technologies]

To be continued …



Architecting a cloud based IP Multimedia System (IMS)

Here is an idea of mine that has been slow cooking in my head for more than a year and a half. It finally worked its way to IP.com. See the link below.

Architecting a cloud based IP Multimedia System (IMS) 

The full article is included below

Abstract

This article describes an innovative technique of “cloudifying” the network elements of the IP Multimedia System (IMS) framework in order to take advantage of key benefits of the cloud like elasticity and utility style pricing. This approach will provide numerous advantages to the Service Provider, like better Return on Investment (ROI), reduction in capital expenditure and quicker deployment times, besides offering the end customer benefits like the availability of high speed and imaginative IP multimedia services.

Introduction

IP Multimedia Systems (IMS) is the architectural framework proposed by the 3GPP body to establish and maintain multimedia sessions using an all-IP network. IMS is a grand vision that is access network agnostic and uses an all-IP backbone to begin, manage and release multimedia sessions. This is done through network elements called Call Session Control Functions (CSCFs), Home Subscriber Servers (HSS) and Application Servers (AS). The CSCFs use SDP over SIP to communicate with other CSCFs and the Application Servers (AS’es). The CSCFs also use DIAMETER to talk to the Home Subscriber Servers (HSS’es).

Session Initiation Protocol (SIP) is used for signaling between the CSCFs to begin, control and release multimedia sessions, and Session Description Protocol (SDP) is used to describe the type of media (voice, video or data). DIAMETER is used by the CSCFs to access the HSS. All these protocols work over IP. The use of an all-IP core network for both signaling and transmitting bearer media makes the IMS a very prospective candidate for the cloud.
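To give a feel for how SIP and SDP fit together, here is a skeletal INVITE with an SDP body; all identities, addresses and ports are invented for illustration, and a real IMS INVITE carries many more headers:

```python
# A skeletal SIP INVITE carrying an SDP body, to show how SIP (signaling) and
# SDP (media description) fit together. All values are illustrative only.
sdp_body = "\r\n".join([
    "v=0",
    "o=alice 2890844526 2890844526 IN IP4 192.0.2.10",
    "s=IMS voice call",
    "c=IN IP4 192.0.2.10",
    "t=0 0",
    "m=audio 49170 RTP/AVP 0",      # one audio stream, PCMU
    "a=rtpmap:0 PCMU/8000",
])

invite = "\r\n".join([
    "INVITE sip:bob@example.net SIP/2.0",
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds",
    "From: <sip:alice@example.net>;tag=1928301774",
    "To: <sip:bob@example.net>",
    "Call-ID: a84b4c76e66710",
    "CSeq: 314159 INVITE",
    "Contact: <sip:alice@192.0.2.10>",
    "Content-Type: application/sdp",
    f"Content-Length: {len(sdp_body)}",
    "",
    sdp_body,
])

print(invite)
```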

This article proposes a novel technique of “cloudifying” the network elements of the IMS framework (the CSCFs) in order to take advantage of cloud technology for an all-IP network. Essentially, this idea proposes deploying the CSCFs (P-CSCF, I-CSCF, S-CSCF, BGCF) over a public cloud. The HSS and AS’es can be deployed over a private cloud for security reasons. The above network elements use either SIP/SDP over IP or DIAMETER over IP. Hence these network elements can be deployed as instances on servers in the cloud equipped with NICs. Note: this does not include the Media Gateway Control Function (MGCF) and the Media Gateway (MGW), as they require SS7 interfaces. Since IP is used between the servers in the cloud, the network elements can set up, maintain and release SIP calls over the servers of the cloud. Hence the IMS framework can be effectively “cloudified” by adopting a hybrid solution: a public cloud for the CSCF entities and a private cloud for the HSS’es and AS’es.

This idea enables the deployment of IMS and the ability for the Operator, Equipment Manufacturer and the customer to quickly reap the benefits of the IMS vision while minimizing the risk of such a deployment.

Summary

IP Multimedia Systems (IMS) has been in the wings for some time. There have been several deployments by the major equipment manufacturers, but IMS is simply not happening. The vision of IMS is truly grandiose. IMS envisages an all-IP core with several servers known as Call Session Control Functions (CSCFs) participating to set up, maintain and release multimedia call sessions. The multimedia sessions can be any combination of voice, data and video.

In the 3GPP Release 5 architecture, IMS comprises the Proxy CSCF (P-CSCF), Serving CSCF (S-CSCF), Interrogating CSCF (I-CSCF), Breakout Gateway Control Function (BGCF), Media Gateway Control Function (MGCF), Home Subscriber Server (HSS) and Application Servers (AS) acting in concert in setting up, maintaining and releasing media sessions. The main protocols used in IMS are SIP/SDP for managing media sessions, which could be voice, data or video, and DIAMETER towards the HSS.

IMS is also access agnostic and is capable of handling landline or wireless calls over multiple devices from the mobile, laptop, PDA, smartphones or tablet PCs. The application possibilities of IMS are endless from video calling, live multi-player games to video chatting and mobile handoffs of calls from mobile phones to laptop. Despite the numerous possibilities IMS has not made prime time.

The technology has not turned into a money spinner for Operators. One of the reasons may be that Operators are averse to investing enormous amounts into new technology and turning their network upside down.

The IMS framework uses CSCFs which work in concert to set up, manage and release multimedia sessions. This is done by using SDP over SIP for signaling and media description. Another very prevalent protocol used in IMS is DIAMETER. DIAMETER is the protocol used for authorizing, authenticating and accounting of subscribers, whose data is maintained in the Home Subscriber Server (HSS). All the above protocols, namely SDP/SIP and DIAMETER, work over IP, which makes the entire IMS framework an excellent candidate for deployment on the cloud.

Benefits

There are 6 key benefits that will accrue directly from the above cloud deployment for the IMS. Such a cloud deployment will

i.    Obviate the need for upfront costs for the Operator

ii.    The elasticity and utility style pricing of the cloud will have multiple benefits for the Service Provider and customer

iii.   Provide quicker ROI for the Service Provider by utilizing an innovative business model of revenue-sharing between the Operator and the equipment manufacturer

iv.   Make headway in IP Multimedia Systems

v.   Enable users of the IMS to avail of high speed and imaginative new services combining voice, data, video and mobility.

vi.   The Service Provider can start with a small deployment and grow as the subscriber base and traffic grows in his network

Also, a cloud deployment of the IMS solution has multiple advantages to all the parties involved namely

a)   The Equipment manufacturer

b)   The Service Provider

c)   The customer

A cloud deployment of IMS will serve to break the inertia that Operators have for deploying new architectures in the network.

a)   The equipment manufacturers, for example the telecommunication organizations that create the software for the CSCFs, can license the applications to the Operators based on an innovative business model of revenue sharing with the Operator based on usage

b)   The Service Provider or the Operator does away with the Capital Expenditure (CAPEX) involved in buying CSCFs along with the hardware. The cost savings can be passed on to the consumers, whose video, data or voice calls will be cheaper. Besides, the absence of CAPEX will provide better margins to the operator. A cloud based IMS will also greatly reduce the complexity of dimensioning a core network; inaccurate dimensioning can result in either over-provisioning or under-provisioning of the network. Utilizing a cloud for deploying the CSCFs, HSS and AS can obviate the need for upfront infrastructure expenses for the Operator. As mentioned above, the Service Provider can pay the equipment manufacturer based on the number of calls or the traffic through the system

c)   Lastly, the customer stands to gain as the IMS vision truly allows for high speed multimedia sessions with complex interactions like multi-party video conferencing and handoffs from mobile to laptop or vice versa. Besides, IMS also allows for whiteboarding and multi-player gaming sessions.

Also the elasticity of the cloud can be taken advantage of by the Operator who can start small and automatically scale as the user base grows.

Description

This article describes a method in which the Call Session Control Functions (CSCFs), namely the P-CSCF, S-CSCF, I-CSCF and BGCF, can be deployed on a public cloud. This is possible because there are no security risks associated with deploying the CSCFs on the public cloud. Moreover, the elasticity and the pay-per-use model of the public cloud are excellent attributes for such a cloudifying process. Similarly, the HSS’es and AS’es can be deployed on a private cloud. This is required because the HSS and the AS do have security considerations, as they hold important subscriber data like the IMS Public User Identity (IMPU) and the IMS Private User Identity (IMPI). However, the Media Gateway Control Function (MGCF) and Media Gateway (MGW) are not included in this architecture, as these 2 elements require SS7 interfaces.

Using the cloud for deployment can bring in the benefits of zero upfront costs, utility style charging based on usage and the ability to grow or shrink elastically as the call traffic expands or shrinks.

This is shown diagrammatically below where all the IMS network elements are deployed on a cloud.

In Fig 1., all the network elements are shown as being part of a cloud.


Fig 1. Cloudifying the IMS architecture.

Detailed description

This idea requires that the IMS solution be “cloudified”, i.e. the P-CSCF, I-CSCF, S-CSCF and the BGCF should be deployed on a public cloud. These CSCFs are used to set up, manage and release calls, and the information that is used for the call does not pose any security risk. These network elements use SIP for signaling and SDP over SIP for describing the media sessions. The media sessions can be voice, video or data.

However, the HSS and AS, which contain the Public User Identity (IMPU), the Private User Identity (IMPI) and other important data, can be deployed in a private cloud. Hence the IMS solution needs a hybrid approach that uses both the public and the private cloud. Besides, the proxy SIP servers, registrars and redirect SIP servers can also be deployed on the public cloud.

Fig 2 below shows how a hybrid cloud solution can be employed for deploying the IMS framework.


Fig 2: Utilizing a hybrid cloud solution for deploying the IMS architecture

A call from a user typically originates from a SIP phone and will initially reach the P-CSCF. After passing through several SIP servers it will reach an I-CSCF. The I-CSCF will use DIAMETER to query the HSS for the correct S-CSCF to handle the call. Once the S-CSCF is identified, the I-CSCF signals the S-CSCF, which takes the call towards a terminating P-CSCF and finally to the end user on his SIP phone. Since the call uses SDP over SIP, we can imagine the call being handled by P-CSCF, I-CSCF, S-CSCF and BGCF instances on the cloud. Each of the CSCFs will have the necessary stacks for communicating with the next CSCF. The CSCFs typically use SIP/SDP over TCP or UDP and finally over IP. Moreover, a query from the I-CSCF or S-CSCF to the HSS will use DIAMETER over SCTP or TCP over IP. Since IP is the prevalent technology between servers in the cloud, communication between CSCFs is possible.
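The routing steps described above can be sketched conceptually as follows; each CSCF is modeled as a plain Python function standing in for an instance on the cloud, and the dictionary lookup stands in for the DIAMETER query to the HSS (this is illustrative pseudologic, not a SIP stack):

```python
# Conceptual sketch of the originating call flow: P-CSCF -> I-CSCF -> (HSS
# lookup) -> S-CSCF -> terminating P-CSCF. All names and hosts are invented.
HSS = {"sip:bob@example.net": "s-cscf-2.example.net"}   # subscriber -> assigned S-CSCF

def p_cscf_originating(invite: dict) -> str:
    # First contact point for the caller's SIP phone
    return i_cscf(invite)

def i_cscf(invite: dict) -> str:
    # Stands in for the DIAMETER query to the HSS to find the serving S-CSCF
    s_cscf_host = HSS[invite["to"]]
    return s_cscf(invite, s_cscf_host)

def s_cscf(invite: dict, host: str) -> str:
    # Applies service logic, then forwards towards the terminating side
    return p_cscf_terminating(invite, host)

def p_cscf_terminating(invite: dict, via: str) -> str:
    return f"INVITE for {invite['to']} delivered via {via}"

print(p_cscf_originating({"from": "sip:alice@example.net", "to": "sip:bob@example.net"}))
```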

Methodology

The Call Session Control Functions (CSCFs P-CSCF, I-CSCF, S-CSCF, BGCF) typically handle the setup, maintenance and release of SIP sessions. These CSCFs use either SIP/SDP to communicate to other CSCFs, AS’es or SIP proxies or they use DIAMETER to talk to the HSS. SIP/SDP is used over either the TCP or the UDP protocol.

We can view each CSCF, HSS or AS as an application capable of managing SIP or DIAMETER sessions. For this, these CSCFs need to maintain different protocol stacks towards the other network elements. Since these CSCFs are primarily applications which communicate using protocols layered over IP, it makes eminent sense to deploy these CSCFs over the cloud.

The public cloud contains servers on which instances of applications can run in virtual machines (VMs). These instances can communicate with other instances on other servers using IP. In essence, the entire IMS framework can be viewed as CSCF instances which communicate with other CSCF instances, the HSS or the AS over IP. Hence, to set up, maintain and release SIP sessions, the P-CSCF, I-CSCF, S-CSCF and BGCF can be executed as separate instances on the servers of a public cloud, each communicating using the protocol stack required by the next network element. The protocol stacks for the different network elements are shown below.

The CSCFs, namely the P-CSCF, I-CSCF, S-CSCF and the BGCF, all have protocol interfaces that use IP. The detailed protocol stacks for each of these network elements are shown below. Since they communicate over IP, the servers need to support 100 Base T Network Interface Cards (NICs), typically with RJ-45 connector cables. Hence high performance servers with 100 Base T NICs can be used for hosting the instances of the CSCFs (P-CSCF, I-CSCF, S-CSCF and BGCF). Similarly, the private cloud can host the HSS, which uses DIAMETER/TCP-SCTP/IP, and the AS, which uses SDP/SIP/UDP/IP. Hence these can be deployed on the private cloud.

Network Elements on the Public Cloud

The following network elements will be on the public cloud

a) P-CSCF b) I-CSCF c) S-CSCF d) BGCF

The interfaces of each of the above CSCFs are shown below

a)   Proxy CSCF (P-CSCF) interface

[Figure: P-CSCF protocol stack]

As can be seen from the above, all the interfaces (Gm, Gq, Go and Mw) of the P-CSCF are either UDP/IP or SCTP/TCP/IP.

b)   Interrogating CSCF(I- CSCF) interface

[Figure: I-CSCF protocol stack]

As can be seen from the above, all the interfaces (Cx, Mm and Mw) of the I-CSCF are either UDP/IP or SCTP/TCP/IP.

c)   Serving CSCF (S-CSCF) interfaces

The interfaces of the S-CSCF (Mw, Mg, Mi, Mm, ISC and Cx) are all either UDP/IP or SCTP/TCP/IP.

[Figure: S-CSCF protocol stack]

d)   Breakout CSCF (BGCF) interface

The interfaces of the BGCF (Mi, Mj, Mk) are all UDP/IP.

[Figure: BGCF protocol stack]

Network elements on the private cloud

The following network elements will be on the private cloud

a)   HSS b) AS

a)   Home Subscriber Service (HSS) interface

The HSS interface (Cx) is DIAMETER/SCTP/TCP over IP.

[Figure: HSS protocol stack]

b)   Application Server (AS) Interface

[Figure: AS protocol stack]

The AS interface ISC is SDP/SIP/UDP over IP.

As can be seen, the interfaces that the different network elements have towards other elements all run over either UDP/IP or TCP/IP.

Hence we can readily see that a cloud deployment of the IMS framework is feasible.

Conclusion


Thus it can be seen that a cloud based IMS deployment is feasible, given the IP interfaces of the CSCFs, HSS and AS. Key features of the cloud like elasticity and utility style charging will make the service attractive to Service Providers. A cloud based IMS deployment is truly a great combination for all parties involved, namely the subscriber, the Operator and the equipment manufacturers. A cloud based deployment will allow the Operator to start with a small customer base and grow as the service becomes popular. Besides, the irresistibility of IMS’ high speed data and video applications is bound to capture the subscribers’ imagination while proving a lot cheaper.

Also see my post on “Envisioning a Software Defined Ip Multimedia System (SD-IMS)”.


The Next Frontier

Published in Telecom Asia – The next frontier, 21 Mar 2012

In his classic book “The Innovator’s Dilemma” Prof. Clayton Christensen of Harvard Business School presents several compelling cases of great organizations that fail because they did not address disruptive technologies, occurring in the periphery, with the unique mindset required in managing these disruptions.

In the book the author claims that when these disruptive technologies appeared on the horizon there were few takers for them, because there were no immediate applications. For example, when the hydraulic excavator appeared, its performance was inferior to that of the existing, predominant manual excavator. But in the course of time the technology behind hydraulic excavators improved significantly and displaced existing technologies. Similarly, the 3.5 inch disk had no immediate takers in desktop computers but made its way into the laptop.

Similarly the mini computer giant Digital Equipment Corporation (DEC) ignored the advent of the PC era and focused all its attention on making more powerful mini-computers. This led to the ultimate demise of DEC and several other organizations in this space. This book includes several such examples of organizations that went defunct because disruptive technologies ended up cannibalizing established technologies.

In the last couple of months we have seen technology trends pouring in. It is now accepted that cloud computing, mobile broadband, social networks, big data, LTE, Smart Grids and the Internet of Things will be key players in the world of our future. We are now at a point in time when serious disruption is not just possible but seems extremely likely. The IT market research firm IDC, in its Directions 2012, believes that we are on the cusp of a Third Platform that will dominate the IT landscape.

There are several technologies that have been appearing on the periphery and have gleaned only marginal interest, for example Super Wi-Fi or Whitespaces, which uses unlicensed spectrum to cover larger distances of up to 100 km. Whitespaces has been trialed by a few companies in the last year. Another interesting technology is WiMAX, which provides speeds of 40 Mbps over distances of up to 50 km. WiMAX’s deployment has been spotty and has not led to widespread adoption in comparison to its apparent competitor, LTE.

In the light of these technology entrants, the disruption in the near future may occur because of a paradigm shift which I would like to refer to as the “Neighborhood Area Computing (NAC)” paradigm. It appears that technology will veer towards neighborhood computing, given the bandwidth congestion issues of the WAN. A neighborhood area network (NAN) will supplant the WAN for networks which address a community in a smaller geographical area.

This will lead to three main trends

Neighborhood Area Networks (NAN):  Major improvements in Neighborhood Area Networks (NAN) are inevitable given the rising importance of smart grids and M2M technology in the context of WAN latencies. Residential homes of the future will have a Home Area Network (HAN) based on Bluetooth or Zigbee protocols connecting all electrical appliances. In a smart grid context, the NAN provides the connectivity between the Home Area Network (HAN) of a future Smart Home and the WAN. While it is possible that the utility’s HAN network will be separate from the IP access network of the residential subscriber, the more likely possibility is that the HAN will be a subnet within the home network and will connect to the NAN.

The data generated from smart grids, M2M networks and mobile broadband will need to be stored and processed immediately through big data analytics in a neighborhood datacenter. Shorter range technologies like WiMAX and Super WiFi/Whitespaces will transport the data to a neighborhood cloud, on which Hadoop based big data analytics will provide real time insights.

Death of the Personal Computer:  The PC/laptop will soon give way to a cloud based computing platform similar to Google’s Chromebook. Not only will we store all our data on the cloud (music, photos, videos), we will also use the cloud for our daily computing needs. Given the high speeds of the NAN, this should be quite feasible in the future. The cloud will remove our worries about virus attacks, patch updates and the need to buy new software. We will also begin to trust our data in the cloud as we progress to the future. Moreover the pay-per-use model will be very attractive to consumers.

Exploding Datacenters:  As mentioned above, a serious drawback of the cloud is WAN latency. It is quite likely that with the increases in processing power and storage capacity, coupled with dropping prices, cloud providers will have hundreds of data centers with around 1,000 servers for each city rather than a few mega data centers with tens of thousands of servers. These data centers will address the computing needs of a community in a small geographical area. Such smaller data centers, typically in a small city, will solve two problems: they will build geographical redundancy into the cloud, besides also providing excellent performance, as NAN latencies will be significantly less than WAN latencies.

These technologies will improve significantly and fill the need for handling neighborhood high speed data.

The future definitely points to computing in the neighborhood.


The emergence of Social Software as a Service (SSaaS)

Published in Telecom Asia, 17 Feb 2012, as “The dawn of Social Software as a Service”

We are in the midst of a Social Networking revolution as we progress to the next decade. As technology becomes more complex in a flatter world, cooperating and collaborating will not only be necessary but imperative. McKinsey in its recent report “Wiring the Open Source Enterprise” talks of the future of a “networked enterprise”, which will require the enterprise to integrate Web 2.0 technologies into its enterprise computing fabric.

Another McKinsey report  “The rise of the networked enterprise: Web 2.0 finds its payday” states “that Web 2.0 payday could be arriving faster than expected”. It goes on to add that “a new class of company is emerging—one that uses collaborative Web 2.0 technologies intensively to connect the internal efforts of employees and to extend the organization’s reach to customers, partners, and suppliers”

Social Software utilizing Web 2.0 technologies will soon become the new reality if organizations want to stay agile. Social Software includes those technologies that enable the enterprise to collaborate through blogs, wikis, podcasts, and communities. A collaborative environment will unleash greater fusion of ideas and can trigger enormous creative processes in the organization.

According to Prof. Clay Shirky of New York University, the underused human potential at companies represents an immense “cognitive surplus” which can be tapped by participatory tools such as Social Software.

A fully operational social network in the organization will enable quicker decision making, trigger creative collaboration and bring together a faster ROI for the enterprise. A shared knowledge pool enables easier access to key information from across the enterprise and facilitates faster decision making.

Enterprise Social Software enables access to a shared knowledge pool across the organization. Employees can share ideas, seek out expert opinion and arrive at solutions much faster. Social collaboration tools can truly unleash a profusion of creative ideas and thought across the organization and enable better problem solving.

Clearly the social network paradigm is a new concept which needs to be adopted by any organization that wants a greater market share and a faster time to market. In today’s knowledge intensive world, the need for an enterprise strategy focused on enabling collaboration through the use of Web 2.0 becomes obvious.

However, enterprises which would like to embrace Social Technologies face the twin challenges of i) developing the application and ii) deploying it on their own data center.

Enterprises would be faced with the typical “build-vs.-buy” quandary. Organizations that want to benefit quickly from Web 2.0 technologies would prefer a buy rather than a build option.

Besides, the deployment of a Social Computing platform would require the commissioning of large data centers to allow for simultaneous access by the platform users. The attendant problems of maintaining a large data center can be very intimidating. The top 3 challenges of large data centers typically center around:

a)      The problems of data growth
b)      The challenges of performance and scalability
c)      The sticky issue of network congestion and connectivity

It is against this backdrop of relevance of Social Software vis-à-vis the enterprises’ need for collaboration tools that Social Software as a Service (SSaaS) makes eminent sense.

If SSaaS could be provided as a service to enterprises with the option of either deploying it on a public or a private cloud it would make the service very attractive.

Enterprises would not have to go through the software development lifecycle of developing the social collaboration tools besides also saving them the upfront capital expenditures of creating the associated data centers. In addition the enterprise would also not have to face the technical challenges of maintaining the data centers.

Enterprises could either license the SSaaS tools only for the organization’s internal use among its employees or it could open it to its employees, suppliers and partners enabling a greater collaboration of ideas and thoughts.

The SSaaS & cloud service provider would charge the enterprise on a pay-per-use basis, according to the number of users and the compute, storage and network consumed.

An SSaaS service would be a win-win for both the service provider and also the enterprise which can tap the creative potential of its employees.

Social Software as a Service (SSaaS) will be extremely attractive as we move to a flatter and a more knowledge intensive world.


Technological hurdles: 2012 and beyond

Published in Telecom Asia, Jan 11, 2012 – Technological hurdles – 2012 and beyond

You must have heard it all by now – the technological trends for 2012 and the future. The predictions range over Big Data, cloud computing, the Internet of Things, LTE, the semantic web, social commerce and so on.

In this post, I thought I should focus on what seem to be significant hurdles as we advance to the future. So for a change, I wanted to play the doomsayer rather than the soothsayer. The positive trends are bound to continue, and in our exuberance we may lose sight of the hurdles before us. Besides, “problems are usually opportunities in disguise”. So here is my list of the top issues facing the industry now.

Bandwidth shortage: A key issue of the computing infrastructure of today is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. Jim Gray (Turing Award, 1998), in his paper “Distributed Computing Economics”, states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. In this paper Jim Gray tells us that the economics of today’s computing depends on four factors, namely computation, networking, database storage and database access. He then breaks down what $1 buys as follows.

One dollar equates to approximately:

≈ 1 GB sent over the WAN
≈ 10 Tops (tera CPU operations)
≈ 8 hours of CPU time
≈ 1 GB of disk space
≈ 10 M database accesses
≈ 10 TB of disk bandwidth
≈ 10 TB of LAN bandwidth

As can be seen from the above breakup, there is a disproportionate contribution by the WAN bandwidth in comparison to the others. In other words, while the processing power of CPUs and storage capacities have multiplied, accompanied by dropping prices, the cost of bandwidth has remained high. Moreover, the available bandwidth is insufficient to handle the explosion of data traffic.

In fact it has been found that the “cheapest and fastest way to move a Terabyte cross country is sneakernet” (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another).

With the burgeoning of bandwidth hungry applications it is obvious that we are going to face a bandwidth shortage. The industry will have to come with innovative solutions to provide what I would like to refer as “bandwidth-on-demand”.

The Spectrum Crunch: Powerful smartphones, extremely fast networks, content-rich applications and increasing user awareness have together resulted in a virtual explosion of mobile broadband data usage. There are 2 key drivers behind this phenomenal growth in mobile data. One is the explosion of devices: smartphones, tablet PCs, e-readers and laptops with wireless access, all delivering high-speed content and web browsing on the move. The second is video. Over 30% of overall mobile data traffic is video streaming, which is extremely bandwidth hungry. The rest of the traffic is web browsing, file downloads and email.

The growth in mobile data traffic has been exponential. According to a report by Ericsson, mobile data is expected to double annually till 2015. Mobile broadband will see a billion subscribers this year (2011), and possibly touch 5 billion by 2015.

An IDATE (a consulting firm) report estimates that total mobile data will exceed 127 exabytes (an exabyte is 10^18 bytes, or 1 million terabytes) by 2020, an increase of over 33% from 2010.

Given the current usage trends, coupled with the theoretical limits of available spectrum, the world will run out of available spectrum for the growing army of mobile users. The current spectrum availability cannot support the surge in mobile data traffic indefinitely, and demand for wireless capacity will outstrip spectrum availability by the middle of this decade or by 2014.

This is a really serious problem. In fact, it is a serious enough issue to have prompted the White House memo titled “Unleashing the Wireless Broadband Revolution”. The US Federal Communications Commission (FCC) has now taken steps to meet the demand by letting wireless users access content via unused airwaves on the broadcast spectrum known as “White Spaces”. Google and Microsoft are already working on this technology, which will allow laptops, smartphones and other wireless devices to transfer data in GB instead of MB through Wi-Fi.

Spectrum shortage is thus an immediate problem that needs to be addressed urgently.

IPv4 exhaustion: IPv4 address space exhaustion has been around for quite some time and warrants serious attention in the not too distant future. This problem may be even more serious than the Y2K problem. The issue is that IPv4 can address only 2^32, or about 4.3 billion, devices. The pool has already been exhausted because of new technologies like IMS, which uses an all-IP core, and the Internet of Things, with more devices and sensors connected to the internet, each identified by an IP address. The solution to this problem has been known for a long time and requires that the Internet adopt the IPv6 addressing scheme. IPv6 uses 128-bit addresses and allows 3.4 x 10^38, or 340 trillion trillion trillion, unique addresses. However, the conversion to IPv6 is not happening at the required pace and will pretty soon have to be taken up on a war footing. It is clear that while the transition takes place both IPv4 and IPv6 will co-exist, so there will be an additional requirement for devices on the internet to be able to convert from one to the other.
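The address-space figures quoted above can be checked with a couple of lines of Python:

```python
# Working out the IPv4 and IPv6 address-space figures quoted above.
ipv4_addresses = 2 ** 32     # 4,294,967,296 - roughly 4.3 billion
ipv6_addresses = 2 ** 128    # roughly 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses (~{ipv4_addresses / 1e9:.1f} billion)")
print(f"IPv6: {float(ipv6_addresses):.2e} addresses")
```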

We are bound to run into a wall if organizations and enterprises do not upgrade their devices to be able to handle IPv6.

Conclusion: These are some of the technological hurdles that confront the computing industry. Given mankind’s ability to come up with innovative solutions, we may find new industries being spawned to solve these bottlenecks.


The story of virtualization

The journey from the early days of batch processing to these days of virtualized computing has been a truly exciting march of progress. The innovations and ideas along the way have transformed the computing landscape as we know it, and promise still more breathtaking changes to come.

Batch processing: Programs written on the computers of those days used punch cards, also known as Hollerith cards. A separate terminal would be used to edit and create the program, which would result in a stack of punched cards. The different stacks of user programs would be loaded into a card reader, which would then queue the programs for processing by the computers of those days. Each program would be executed in sequential order.

Imagine if our days were structured sequentially, where we would need a particular task to complete fully before we could start another one. That would be a true waste of time. While one task progresses we could be focusing on other tasks.

The inefficiencies of batch processing soon became obvious and led to the development of multi-tasked systems, in which each user’s application is granted a slice of the CPU cycles. The Operating System (OS) would cycle through the list of processes, granting them a specific number of cycles to compute each time. This development soon led to different operating systems including Windows, Unix, Linux and so on.

Multitasking: Multitasking evolved because designers realized that Central Processing Unit (CPU) cycles were wasted when programs waited for input/output to arrive or complete. Hence the computer’s operating system (OS), the central nervous system of the machine, would swap the user’s program out of the CPU and grant the CPU to other user applications. This way the CPU is utilized efficiently.

The pen analogy: For this analogy let us consider a fountain pen to be the CPU. While Joe is writing a document, he uses the fountain pen. Now, let’s assume that Joe needs to print a document. While Joe saunters off to pick up his printout, the fountain pen is given to Kartik, who needs it for his tax report. Kartik soon gets tired and takes a coffee break. Now the pen is given to Jane, who needs to fill up a form. When Jane completes her form the pen is handed back to Joe, who has just returned with his printout. The pen (CPU) is thus used efficiently among the many users.

While multi-tasking was a major breakthrough, it did lead to an organization’s applications being developed on different OS flavors. Hence a large organization would be left with software silos, each with its own unique OS. This was a problem when the organization wanted to consolidate all its relevant software under a common umbrella. For example, a telecom operator may have payroll applications that run on Windows, accounting on Linux and human resources on Unix. It thus became difficult for the organization to get a holistic view of what happened in the Finance department as a whole. Enter ‘virtualization’. Virtualization enables applications created for different OSes to run over a layer known as the “hypervisor” that abstracts the raw hardware.

Virtualization: Virtualization in essence abstracts the raw hardware through a software layer called the hypervisor. The hypervisor runs on the bare metal of the CPU. Applications that run over the hypervisor can choose the operating systems of their choice, namely Windows, Linux, Unix etc. The hypervisor effectively translates the different OS instructions to the machine instructions of the underlying processor.

The car analogy: Imagine that you got into a car. Once inside the car you had a button which when pressed would convert the car either into a roaring Ferrari, Lamborghini or a smooth Mercedes, BMW. The dashboard, the seats, engine all magically transformed into the car of your dreams. This is exactly what virtualization tries to achieve.

Server Pooling: However, virtualization went further than just enabling applications created on different OSes to run on a single server loaded with the hypervisor. Virtualization also enabled the consolidation of server farms. Virtualization brings together the different elements of an enterprise, namely the servers, each with its memory and processors, the different storage options (direct attached storage (DAS), Fibre Channel storage area network (FC SAN), Network Attached Storage (NAS)) and the networking elements. Virtualization consolidates the compute, storage and networking elements and provides the illusion that appropriate compute, storage and network are provided to applications on demand. The applications are provided with virtual machines with the necessary computing, storage and network units as required. Virtualization also takes care of providing high availability (HA), mobility and security to the applications, besides enabling the illusion of shared resources. Moreover, if any of the servers on which an application is executing goes down for any reason, the application is migrated seamlessly to another server.

The train analogy: Assume that there is a train with ‘n’ wagons. Commuters can get on and off at any station. When they get on the train they are automatically allocated a seat, a berth and so on. The train keeps track of how occupied it is and provides the appropriate seating dynamically. If the wheels of any wagon get stuck, the passenger is lifted and shifted, seamlessly, to another wagon while the stuck wagon is safely de-linked from the train.

Virtualization has many applications. It is the dominant technology used in the creation of public, private or hybrid clouds, thus providing an on-demand, scalable computing environment. Virtualization is also used in the consolidation of server farms, enabling optimum usage of the servers.


Technologies to watch: 2012 and beyond

Published in Telecom Asia – Technologies to watch: 2012 and beyond

Published in Telecoms Europe – Hot technologies for 2012 and beyond

A keen observer of the technological firmament today will observe a grand spectacle of diverse technological events. Some technological trends will blaze a trail and become trend setters while others will vanish without a trace. The factors that make certain technologies endure in comparison to others can be many, ranging from pure necessity to a coolness factor, from innovativeness to cost. This article looks at some of the technologies that are certain to be trail blazers in the years to come.

Software Defined Networks (SDNs):  Software Defined Networks (SDNs) are based on the path breaking paradigm of separating the control of a network flow from the actual flow of data. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley, is based on the OpenFlow Protocol and represents a paradigm shift in the way networking elements operate. Software Defined Networks (SDN) decouple the routing and switching of the data flows and move the control of the flow to a separate network element, namely the Flow Controller. The motivation for this is that the flow of data packets through the network can be controlled in a programmatic manner. The OpenFlow Protocol has 3 components: the Flow Controller that controls the flows, the OpenFlow switch with its Flow Table, and a secure connection between the Flow Controller and the OpenFlow switch. Software Defined Networks (SDNs) also include the ability to virtualize the network resources. Virtualized network resources are known as a “network slice”. A slice can span several network elements including the network backbone, routers and hosts. The ability to control multiple traffic flows programmatically provides enormous flexibility and power in the hands of users. SDNs are bound to be the network elements of the future.
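A highly simplified sketch of the match/action idea behind an OpenFlow flow table is shown below; real OpenFlow matches many more header fields and supports much richer actions, so treat this only as an illustration of the concept:

```python
from ipaddress import ip_address, ip_network

# A toy flow table: (match criteria, action). Addresses and port numbers are
# invented; a real switch matches on MACs, VLANs, ports and more.
flow_table = [
    ({"ip_dst": "10.1.0.0/16", "tcp_dst": 80}, {"output_port": 2}),
    ({"ip_dst": "10.2.0.0/16"},                {"output_port": 3}),
]

def matches(packet: dict, match: dict) -> bool:
    if "ip_dst" in match and ip_address(packet["ip_dst"]) not in ip_network(match["ip_dst"]):
        return False
    if "tcp_dst" in match and packet.get("tcp_dst") != match["tcp_dst"]:
        return False
    return True

def forward(packet: dict) -> dict:
    """Table lookup in the switch; a table miss is punted to the flow controller."""
    for match, action in flow_table:
        if matches(packet, match):
            return action
    return {"send_to_controller": True}

print(forward({"ip_dst": "10.1.4.7", "tcp_dst": 80}))   # {'output_port': 2}
print(forward({"ip_dst": "192.0.2.1"}))                 # {'send_to_controller': True}
```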

Smart Grids: The energy industry is delicately poised for a complete transformation with the evolution of the smart grid concept. There is now an imminent need for increased efficiency in power generation, transmission and distribution, coupled with a reduction of energy losses. In this context many leading players in the energy industry are coming up with a connected, end-to-end digital grid to smartly manage energy transmission and distribution. The digital grid will have smart meters, sensors and other devices distributed throughout the grid, capable of sensing, collecting, analyzing and distributing the data to devices that can take action on it. The huge volume of collected data will be sent to intelligent devices which will use wireless 3G networks to transmit the data. Appropriate action, like alternate routing and optimal energy distribution, would then happen. Smart Grids are a certainty given that this technology addresses the dire need for efficient energy management. Smart Grids, besides managing energy efficiently, also save costs by preventing inefficiency and energy losses.

The NoSQL Paradigm: In large web applications where performance and scalability are key concerns, a non-relational database like a NoSQL store is a better choice than the more traditional relational databases. There are several examples of such databases – among the more reputed are Google’s BigTable, HBase, Amazon’s Dynamo, CouchDB and MongoDB. These databases partition the data horizontally and distribute it among many regular commodity servers. Accesses to the data are based on get(key) or set(key, value) type APIs, and keys are placed on servers using a consistent hashing scheme, for example the Distributed Hash Table (DHT) method. The ability to distribute data and queries across several servers provides the key benefit of scalability. Clearly, having a single database handle an enormous number of transactions will result in performance degradation as the number of transactions increases. Applications that have to frequently access and manage petabytes of data will clearly have to move to the NoSQL paradigm of databases.
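Here is a minimal sketch of the consistent hashing idea such stores rely on for placing keys on servers; the node names are illustrative, and production systems add virtual nodes and replication on top of this basic ring:

```python
import bisect
import hashlib

def ring_position(value: str) -> int:
    """Map a string onto a fixed position on the hash ring."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

servers = ["node-a", "node-b", "node-c", "node-d"]
ring = sorted((ring_position(s), s) for s in servers)
positions = [pos for pos, _ in ring]

def lookup(key: str) -> str:
    """Walk clockwise around the ring to the first server at or after the key."""
    idx = bisect.bisect(positions, ring_position(key)) % len(ring)
    return ring[idx][1]

for key in ("user:42", "order:2012-03-21", "cart:alice"):
    print(key, "->", lookup(key))
```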

Near Field Communications (NFC): Near Field Communications (NFC) is a technology whose time has come. Mobile phones enabled with NFC technology can be used for a variety of purposes. One such purpose is integrating credit card functionality into mobile phones using NFC. The major mobile players are already integrating NFC into their newer versions of mobile phones, including Apple’s iPhone, Google’s Android and Nokia. We will never again have to carry a stack of credit cards in our wallets; our mobile phone will double up as a Visa, MasterCard, etc. NFC also allows retail stores to send promotional coupons to subscribers who are in the vicinity of a shopping mall, and posters or trailers of movies running in a theatre can be sent as multimedia clips to those travelling near a movie hall. NFC also allows exchanging contact lists with friends when they are in close proximity.

The Other Suspects: Besides the above we have other usual suspects

Long Term Evolution (LTE): LTE is the latest wireless technology and enables wireless access speeds of up to 56 Mbps. With the burgeoning interest in tablets and smartphones with their countless apps, LTE will be used heavily as we move along. For a vision of where telecom is headed, do read my post “The Future of Telecom”.

Cloud Computing: Cloud Computing is the other technology that is bound to gain momentum in the years ahead. Besides obviating the need for upfront capital expenditure the cloud enables quick and easy deployment of applications. Moreover the elasticity of the cloud will make it irresistible to large enterprises and corporations.

The above is a list of technologies to watch as they create new paths and blaze new trails. All these technologies are bound to transform the world as we know it and make our lives easier, better and more comfortable. These are the technologies that we need to focus on as we move bravely into our future. Do read my post for the year 2011, “Technology Trends – 2011 and beyond”.


Profiting from a cloud deployment

Cloud computing does offer enterprises and organizations a mixed bag of goodies. For one, it provides utility style computing, the ability to grow and shrink with changing loads, zero upfront costs, etc. The benefits of cloud computing are many, but does it all add up to profit for an enterprise? That is the critical question that needs to be answered.

This post takes a look at what it takes for a cloud deployment to be profitable for an organization.

The critical parameters for any web application are latency and throughput. A well designed web application, whether it is an e-retail site or an ad serving application, will try to minimize the latency or response time while at the same time maximizing the throughput of the application. For any application, while the latency can be kept within specified limits, the throughput will tend to plateau at a certain level and will not increase with increasing traffic. Utilizing a larger instance can raise the throughput plateau slightly, but in any case the reality is that throughput tends to flatten as the traffic increases.

A typical cloud application will be made of several compute instances, database instances, DNS services etc. Cloud usage is billed by the hour. Hence we can represent the cost of a cloud deployment as follows

Cost (cloud deployment) = m * compute instance + n * database instance + o * network bytes + P

Where P = cost of DNS + Elastic IPs + other costs.

This can be represented by the formula

C = a * D * t

where C = cost of cloud deployment

D = costs per hour of the deployment

and ‘a’ is some arbitrary constant and ‘t’ is the time

Let us assume that for the cloud deployment we get a throughput of T.

The revenue for a web application whether it is an e-commerce site, an e-ticketing site or an ad serving engine will all depend on the throughput i.e. larger the throughput, larger the revenue and hence profit. We can then say that ‘R’ the revenue is

R (revenue) = k * T * t

In other words, the revenue is proportional to the throughput.

Hence, to determine the profitability of a particular cloud deployment, we need to compare the cost of the deployment for a given throughput against the projected revenue from that throughput. As long as the cost of the deployment is less than the revenue arising from the throughput, the deployment will be profitable. This can be represented pictorially as below.

The graph clearly shows that for a profitable deployment

d/dt (k * T *t) > d/dt (a * D * t) or

k * T > a * D

Hence, as can be seen from the picture, as long as the slope of the cumulative deployment cost is less than the slope of the revenue, the deployment will be profitable.
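A small worked example (with invented numbers) shows how the condition k * T > a * D can be checked for a candidate deployment:

```python
# Worked example of the profitability condition k * T > a * D derived above.
# All figures are invented purely to illustrate the comparison.
hours = 24 * 30                 # one month of operation

D = 4.00                        # deployment cost per hour (instances, storage, network)
a = 1.0                         # cost scaling constant

T = 50_000                      # sustained throughput, requests per hour
k = 0.0001                      # revenue earned per request

cost    = a * D * hours
revenue = k * T * hours

print(f"monthly cost    : ${cost:,.2f}")
print(f"monthly revenue : ${revenue:,.2f}")
print("profitable" if k * T > a * D else "not profitable")
```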


The Case for a Cloud Based IMS Solution

IP Multimedia Systems (IMS) has been in the wings for some time. There have been several deployments by the major equipment manufacturers, but IMS is simply not happening. The vision of IMS is truly grandiose. IMS envisages an all-IP core with several servers known as Call Session Control Functions (CSCFs) participating to set up, maintain and release call sessions.

In the 3GPP Release 5 architecture, IMS comprises the Proxy CSCF (P-CSCF), Serving CSCF (S-CSCF), Interrogating CSCF (I-CSCF), Breakout Gateway Control Function (BGCF), Home Subscriber Server (HSS) and Application Servers (AS) acting in concert in setting up, maintaining and releasing media sessions. The main protocols used in IMS are SIP/SDP for managing media sessions, which could be voice, data or video, and DIAMETER for connecting to the HSS and the Application Servers.

IMS is also access agnostic and is capable of handling landline or wireless calls over multiple devices, from the mobile, laptop and PDA to smartphones and tablet PCs. The application possibilities of IMS are endless, from video calling and live multi-player games to video chatting and handoffs of calls from mobile phones to laptops. Despite the numerous possibilities IMS has not made prime time. While IMS technology paints a grand picture, it has somehow not caught on. IMS as a technology holds a lot of promise but has remained just that – promising technology.

The technology has not made inroads into people’s imaginations or turned into a money spinner for Operators. One of the reasons may be that Operators are averse to investing enormous amounts into new technology and turning their network upside down.

This article provides an innovative approach to introducing IMS in the network by taking advantage of the public cloud!

Since IMS is an all-IP network and the protocol between the CSCF servers is SIP/SDP over TCP/IP, it can be readily seen that IMS is a prime candidate for the public cloud. An IMS architecture deployed on the cloud would have several instances of P-CSCFs, S-CSCFs, BGCFs, HSS’es and AS’es all sitting on the cloud. An architectural diagram is shown below.

Deploying the CSCFs on the public cloud has multiple benefits. For one, a cloud deployment will eliminate the upfront CAPEX costs for the Operator. The cost savings can be passed on to the consumers, whose video, data or voice calls will be cheaper. Besides, the absence of CAPEX will provide better margins to the operator. Lower costs to the consumer and better margins for the Operator are truly an unbeatable combination.

Also the elasticity of the cloud can be taken advantage of by the operator who can start small and automatically scale as the user base grows.

Thus a cloud based IMS deployment is truly a great combination for the subscriber, the operator and the equipment manufacturers. The cloud’s elasticity will automatically provide for growth as the irresistibility of IMS’ high speed video applications catches the public imagination.

If IMS as a technology is to become commonplace, then Operators should plan on deploying their IMS on the public cloud and reap the manifold benefits.

Please see my post “Architecting a cloud based IP Multimedia System (IMS)” for a more detailed view of the above.

A related post of relevance is “Adding the OpenFlow variable to the IMS equation“.
