Envisioning a Software Defined IP Multimedia System (SD-IMS)


In my earlier post “Architecting a cloud based IP Multimedia System (IMS)” I suggested the idea of “cloudifying” the network elements of the IP Multimedia System. This would bring multiple benefits to Service Providers, as it would enable quicker deployment of the network elements of the IMS framework, faster ROI and a reduction in CAPEX. Besides, CSPs can take advantage of the elasticity and utility-style pricing of the cloud.

This post takes this idea a logical step forward and proposes a Software Defined IP Multimedia System (SD-IMS).

In today’s world of scorching technological pace, static configurations for IT infrastructure, network bandwidth and QoS, and fixed storage volumes will no longer be sufficient.

We are now in the age of defining requirements dynamically through software. This is the new paradigm of today’s world. Hence we have Software Defined Compute, Software Defined Network, Software Defined Storage and also Software Defined Radio.

This post makes the case for architecting an IP Multimedia System that uses all of the above approaches, to further enable CSPs and operators to get better returns faster, without the headaches of earlier static networks.

The IP Multimedia System (IMS) is the architectural framework proposed by the 3GPP to establish and maintain multimedia sessions over an all-IP network. IMS is a grand vision that is access-network agnostic and uses an all-IP backbone to set up, manage and release multimedia sessions.

The problem:

Any core network has the problem of dimensioning its various network elements. There is always the fear of either under-dimensioning the network, causing failed calls, or over-dimensioning it, resulting in wasted excess capacity.

The IMS was created to handle voice, data and video calls. In addition, in the IMS the SIP user endpoints can renegotiate the media parameters, moving up from voice to video or down from video to voice by negotiating different codecs. This requires that the key parameters of the pipe, such as QoS and bandwidth, be changed dynamically.

The solution

The approach suggested in this post is to have a Software Defined IP Multimedia System (SD-IMS), as follows.

In other words, the compute instances, network, storage and radio frequency all need to be managed through software, based on demand.

Software Defined Compute (SDC): The traffic in a core network can be seasonal, bursty and bandwidth intensive. To handle these changing demands, the CSCF instances (P-CSCF, S-CSCF, I-CSCF etc.) must all scale up or down. This can be done through Software Defined Compute, that is, by auto scaling the CSCF instances: CSCF compute instances are created or destroyed depending on the traffic traversing the switch.
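To make the auto-scaling idea concrete, here is a minimal sketch in Python of a threshold-based scaling decision for S-CSCF instances. The per-instance capacity, the utilisation thresholds and the idea of counting active calls are illustrative assumptions, not figures from any standard or product.

```python
# Illustrative sketch: threshold-based auto scaling of CSCF instances.
# CALLS_PER_INSTANCE and the thresholds are placeholder values; a real
# deployment would drive this logic from the orchestrator's own metrics API.

CALLS_PER_INSTANCE = 500          # assumed capacity of one S-CSCF instance
SCALE_UP_UTIL = 0.80              # add capacity above 80% utilisation
SCALE_DOWN_UTIL = 0.30            # remove capacity below 30% utilisation

def scale_cscf(active_calls: int, running_instances: int) -> int:
    """Return the desired number of S-CSCF instances for the current load."""
    utilisation = active_calls / (running_instances * CALLS_PER_INSTANCE)
    if utilisation > SCALE_UP_UTIL:
        return running_instances + 1
    if utilisation < SCALE_DOWN_UTIL and running_instances > 1:
        return running_instances - 1
    return running_instances

# Example: 2 instances handling 900 calls are ~90% utilised, so scale out to 3.
print(scale_cscf(active_calls=900, running_instances=2))
```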

Software Defined Network (SDN): The IMS envisages the ability to transport voice, data and video, besides allowing media sessions to be negotiated by the SIP user endpoints. Software Defined Networks (SDNs) allow the network resources (routers, switches, hubs) to be virtualized.

SDNs can dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled programmatically by the flow controller using the OpenFlow protocol. This is very well suited to the IMS architecture: the SDN can allocate flows based on bandwidth, QoS and type of traffic (voice, data or video).
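As an illustration of what such programmatic flow control might look like, the sketch below maps IMS traffic types to a QoS treatment that a controller could push to the switches (for example via OpenFlow). The DSCP values, bandwidth figures and rule format are assumptions made for illustration, not the API of any particular controller.

```python
# Conceptual sketch: per-media-type QoS policy an SDN controller could
# translate into flow rules. Values are illustrative assumptions only.

FLOW_POLICY = {
    "voice": {"dscp": 46, "min_bandwidth_kbps": 100,  "priority": 7},  # EF
    "video": {"dscp": 34, "min_bandwidth_kbps": 2000, "priority": 5},  # AF41
    "data":  {"dscp": 0,  "min_bandwidth_kbps": 0,    "priority": 1},  # best effort
}

def build_flow_rule(src_ip: str, dst_ip: str, media_type: str) -> dict:
    """Return an abstract flow rule the controller would install on switches."""
    policy = FLOW_POLICY[media_type]
    return {
        "match": {"ipv4_src": src_ip, "ipv4_dst": dst_ip},
        "actions": {"set_dscp": policy["dscp"],
                    "queue_min_kbps": policy["min_bandwidth_kbps"]},
        "priority": policy["priority"],
    }

print(build_flow_rule("10.0.0.1", "10.0.0.2", "video"))
```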

Software Defined Storage (SDS): A key requirement in the core network is the ability to charge customers. Call Detail Records (CDRs) are generated at various points of the call, then aggregated and sent to the billing center to generate the customer bill.

Software Defined Storage (SDS) abstracts storage resources and enables pooling, replication and on-demand provisioning. The ability to pool storage resources and allocate them based on need is extremely important for the large amounts of data generated in core networks.
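A minimal sketch of the pooling idea, assuming a hypothetical StoragePool abstraction: CDR storage is carved out of a shared pool on demand instead of being statically pre-allocated. A real deployment would call the SDS controller's own provisioning API.

```python
# Sketch of pooled, on-demand storage allocation for CDR files.
# StoragePool and provision_volume() are hypothetical names for illustration.

class StoragePool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0

    def provision_volume(self, size_gb: int) -> bool:
        """Carve a volume out of the pool if free capacity allows."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            return False
        self.allocated_gb += size_gb
        return True

cdr_pool = StoragePool(capacity_gb=10_000)

# Provision an extra 500 GB when the daily CDR volume spikes.
if cdr_pool.provision_volume(500):
    free_gb = cdr_pool.capacity_gb - cdr_pool.allocated_gb
    print("CDR volume provisioned; free capacity:", free_gb, "GB")
```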

Software Defined Radio (SDR): This is another aspect that all core networks must address. The advent of mobile broadband has resulted in a mobile data explosion, portending a possible spectrum crunch. In order to use the available spectrum efficiently and avoid spectrum exhaustion, Software Defined Radio (SDR) has been proposed. SDR allows radio stations to hop frequencies, enabling them to use a frequency where there is less contention (see “We need to think differently about spectrum allocation … now”). In future, LTE-Advanced or LTE with CS fallback will have to be designed with SDRs in place.
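The hopping decision itself can be as simple as picking the least-contended channel from current measurements. The sketch below illustrates this with made-up channel frequencies and interference readings.

```python
# Illustrative sketch of an SDR frequency-hopping decision: choose the
# channel with the lowest measured contention. Sample values are invented.

def pick_channel(interference_by_channel):
    """Return the centre frequency (MHz) with the lowest interference level."""
    return min(interference_by_channel, key=interference_by_channel.get)

measurements = {1800.0: 0.72, 1810.0: 0.35, 1820.0: 0.58}  # channel -> contention
print(pick_channel(measurements))  # -> 1810.0
```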

Conclusion:

A Software Defined IMS makes eminent sense in light of the characteristics of a core network architecture. Besides ‘cloudifying’ the network elements, the ability to programmatically control the CSCFs, network resources, storage and frequency will be critical for the IMS. This is a novel idea, but well worth a thought!


Cloud Computing – Show me the money!

Published in Telecom Lead – Cloud Computing – Show me the money!

A lot has been said about the merits of cloud computing and how it is going to be the technological choice of most enterprises in the not-so-distant future. But the key question that is bound to keep cropping up in the higher echelons of the enterprise is whether the cloud makes good business sense. Most know that cloud computing adopts a pay-per-use model, similar to regular utilities like electricity and water, and does away with upfront infrastructure costs; yet the nagging question for most senior management is whether cloud computing is a prudent choice in the long term.

This is not an easy question to answer and depends on a multitude of factors. The alternative to cloud computing is an in-house infrastructure of servers, hardware and software, software licenses, broadband links, firewalls etc. All of these form the Capital Expenditure (CAPEX) of the organization. On top of these come the Operational Expenditures (OPEX) of real estate to house the equipment, power supply systems, cooling systems, maintenance personnel, annual maintenance contracts (AMC) etc., which are recurring expenses for the organization.

Cloud computing does away completely with the procurement of hardware, software, databases, licenses etc., and an enterprise should be able to host its application in a couple of hours, provided it knows ahead of time the resources the application will need.

Hence, while the upfront and running costs of maintaining a data center are high in comparison to the zero upfront cost of deploying on the cloud, the steeper operational costs of the cloud will eventually catch up with those of the in-house infrastructure.

Depending on how well the application is designed, the point at which the cumulative running costs of the cloud break even with those of the in-house data center can be pushed a couple of years down the line after the application is deployed. Assuming that the break-even happens in 3 years, the advantage of cloud deployment is that the enterprise does not have to worry about equipment obsolescence, software upgrades etc., not to mention the depreciation of the equipment.
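A rough sketch of this break-even reasoning is given below, with assumed cost figures that are purely illustrative: cumulative in-house cost starts high because of CAPEX but grows slowly, while the cloud bill starts at zero and grows with usage until it overtakes the in-house total.

```python
# Break-even sketch under assumed figures (not from the article):
# in-house = upfront CAPEX plus yearly OPEX; cloud = yearly pay-per-use bill.

CAPEX_IN_HOUSE = 300_000         # assumed upfront cost of servers, licenses etc.
OPEX_IN_HOUSE_PER_YEAR = 60_000  # assumed power, cooling, staff, AMC
CLOUD_COST_PER_YEAR = 140_000    # assumed yearly cloud bill for the same workload

def cumulative_costs(years: int):
    for year in range(1, years + 1):
        in_house = CAPEX_IN_HOUSE + OPEX_IN_HOUSE_PER_YEAR * year
        cloud = CLOUD_COST_PER_YEAR * year
        yield year, in_house, cloud

for year, in_house, cloud in cumulative_costs(5):
    marker = "  <- cloud overtakes in-house" if cloud > in_house else ""
    print(f"year {year}: in-house {in_house:,}  cloud {cloud:,}{marker}")
```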

Moreover, cloud technology is extremely useful to enterprises planning to deploy applications for which it is difficult to forecast the traffic they will attract. Where the traffic may be intermittent, bursty or seasonal, the cloud makes perfect business sense since it can scale up or scale down depending on the traffic.

Some typical applications which are prime candidates for the cloud are CRM software, office tools, testing tools, online retail stores, webmail etc.

One possible worry for the enterprise is security when deploying to the public cloud. In such situations the organization can adopt a hybrid strategy, where its sensitive data is hosted in in-house data centers while the main application is hosted on a public cloud.

Hence in most situations cloud deployments do have a definite edge for certain key applications of the enterprise.


The Business of Cloud Computing

Cloud computing is the spanking new paradigm in the world of computing. The key differentiator of this technology is that the enterprise only pays for the amount of resources used, be it CPUs, memory or databases. While it does away with Capital Expenditure for organizations by providing a utility model of pricing, it results in recurring Operating Expenses. The important thing, however, is that the cloud grows and shrinks according to demand, and hence the cost to the organization depends on the traffic it generates. While web-based applications are prime candidates for the cloud, other equally eligible candidates are batch processing jobs, nightly builds and CPU-intensive analytics. Except for web applications, a reasonable estimate can be made of the resources these other workloads need, and an appropriate choice made on the cloud.

This article looks at web applications, where the traffic on the site can be seasonal and can vary over the course of the day. Besides, web sites should be capable of handling bursty traffic with enormous loads at particular intervals.

The important consideration for web sites is to ensure that the application is truly optimized and scales horizontally. While it appears that scaling out will occur for any reasonably designed application, the issue is that as the number of hits on the web site increases, the response time rises steeply while the number of transactions per second plateaus at some particular load level and does not increase after that. In other words, for a certain CPU instance configuration the peak transactions per second reach a particular limit and cannot be increased any further. However, the cloud also provides a key component, namely the load balancer, along with auto scaling, which creates new instances when this threshold is reached.

What are the business considerations that need to be taken while designing for the cloud?

One needs to be conservative in choosing the instance type. While larger instances provide better performance, they also cost more. Hence the instance type should be large enough and no larger. It would be wasteful to use extremely large instances where the last instance handles only a fraction of the total traffic while costing a lot more.

The analogy is that if 16 units of task have to be performed, it is better to use small CPU instances capable of handling 3 units of task each, requiring a total of 6 instances (6 * 3 = 18 > 16), rather than large CPU instances capable of handling 5 units of task each, requiring a total of 4 large instances (4 * 5 = 20 > 16). The second option would result in a greater waste of processing power.
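The same arithmetic can be written out explicitly; the per-instance costs below are assumed values chosen only to show how the spare capacity and total cost compare between the two options.

```python
# Sizing arithmetic from the paragraph above: 16 units of work served by
# small instances (3 units each) versus large instances (5 units each).
# The cost_per_instance figures are illustrative assumptions.

import math

def capacity_plan(workload_units: int, units_per_instance: int, cost_per_instance: float):
    """Return (instances needed, spare capacity, total cost) for one instance type."""
    instances = math.ceil(workload_units / units_per_instance)
    spare = instances * units_per_instance - workload_units
    return instances, spare, instances * cost_per_instance

print(capacity_plan(16, 3, 1.0))  # -> (6, 2, 6.0): six small instances, 2 units spare
print(capacity_plan(16, 5, 2.0))  # -> (4, 4, 8.0): four large instances, 4 units spare
```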

Assume that the upfront cost to the organization of hosting the website in-house is ‘P’, and that this cost, amortized over a period of one year, is ‘p’ per hour. Further, if the instance cost is ‘c’ per hour, ‘n’ is the number of instances needed to support the projected demand, and the revenue to the organization hosting the website is ‘r’ per 1000 hits, then a cloud deployment will make business sense when

(r_h - n * c_h) - p_h > 0, where the subscript h denotes the value for a given hour

As long as this quantity is positive the organization will profit. However, as the traffic increases and the throughput of the website plateaus, the enterprise will hit a ‘window of diminishing returns’.

However, if the performance of the application is poor and the number of instances needed to support the traffic is disproportionately large, then the quantity above will be negative and will result in a loss to the organization:

(r_h - n * c_h) - p_h < 0
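The condition can be checked directly in code; all the hourly figures below are placeholders chosen only to illustrate a profitable case and a loss-making case.

```python
# Direct translation of the condition (r_h - n * c_h) - p_h > 0.
# All values are placeholder hourly figures for illustration only.

def hourly_margin(r_h: float, n: int, c_h: float, p_h: float) -> float:
    """Hourly revenue minus cloud instance cost minus amortised upfront cost."""
    return (r_h - n * c_h) - p_h

# A well-behaved application where 3 instances suffice: profitable.
print(hourly_margin(r_h=12.0, n=3, c_h=0.50, p_h=2.0) > 0)   # True

# A poorly scaling application needing many more instances for the same revenue.
print(hourly_margin(r_h=12.0, n=25, c_h=0.50, p_h=2.0) > 0)  # False, loss-making
```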

Hence deployment to the cloud, besides requiring a strong technical background, also needs sound business sense in order to reap the benefits of the cloud.


Cloud, analytics key tools for today’s telcos

Published in Telecom Asia Aug 20, 2010 – http://bit.ly/dxKbsR

Operators facing dwindling revenue from wireline subscribers, fierce tariff wars and exploding mobile data traffic are continually being pressured to do more for less. Spending on infrastructure is increasing as they look to provide better service within slender budgets.
In these tough times telcos have to devise new and innovative strategies and make judicious technology choices. Two promising technologies, cloud computing and analytics, are shaping up as among the best choices to make.
Cloud architecture does away with the worry of planning the computing resources needed, the real estate, the costs of acquiring them and concerns about their obsolescence. It allows CSPs to purchase processing power, platforms and databases almost as a utility, like electricity or water.
Cloud consumers only pay for what they use. The magic of this promising technology is the elasticity that the cloud provides – it expands to accommodate increasing demands and contracts when the demand drops.
The cloud architectures of Amazon, Google and Microsoft – currently the three biggest cloud providers – vary widely in their capabilities and features. These strengths and weaknesses should be taken into account while planning a cloud system. Each is best suited for only a certain class of applications unique to each individual cloud provider.
On one end of the spectrum Amazon’s EC2 (Elastic Compute Cloud) provides a virtual machine and a wealth of associated tools for storage and notifications. But the trade-off for increased flexibility is that users must take responsibility for designing resiliency into their systems.
On the other end is Google’s App Engine, a highly scalable cloud architecture that handles failures but is a lot more restrictive. Microsoft’s Azure is based on the .NET architecture and in terms of flexibility and features lies between these two.
When implementing such an architecture, an organization should take a long hard look at its computing software inventory to decide which applications are worthy of migrating to the cloud. The best candidates are processing-intensive in-house applications that deliver standardized functionality and interfaces, and whose software architecture is made up of loosely coupled communicating systems.
Applications that deal with sensitive data should be retained within the organization’s internal computing infrastructure, because security is currently the most glaring issue with the cloud. Cloud providers do provide various levels of security to users, but this is an area in keen need of standardization.
But if the CSP decides to build components of an OSS system – rather than buying a pre-packaged system – it makes good business sense to develop for the cloud.
A cloud-based application must have a few essential properties. First, it is preferable if the application was designed on SOA principles. Second, it should be loosely coupled. And lastly, it needs to be an application that can be scaled rapidly up or down based on the varying demands.
The other question is which legacy systems can be migrated. If the OSS/BSS systems are based on commercial off-the-shelf systems these can be excluded, but an offline bill processing system, for example, is typically a good candidate for migration.
Mining wisdom from data
The cloud can serve as the perfect companion for another increasingly vital operational practice – data analytics. The cloud is capable of modeling large amounts of data, and running models to process and analyze this data. It is possible to run thousands of simultaneous instances on the cloud and mine for business intelligence in the oceans of telecom data operators generate.
Today’s CSP maintains software systems generating all kinds of customer data, covering areas ranging from billing and order management to POS, VAS and provisioning. But perhaps the largest and richest vein of subscriber information is the call detail records database.
All this data is worthless, though, if it cannot be mined and analyzed. Formal data mining and data analytics tools can be used to identify patterns and trends that will allow operators to make strategic, knowledge-driven decisions.
Analytics involves many complex areas like predictive analytics, neural nets, decision trees and classification. Some of the approaches used in data analytics include prediction, deviation detection, degree of influence and classification.
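As a toy illustration of the classification approach mentioned above, a decision tree can be trained on a handful of usage features to flag likely churners. The features and training rows below are invented sample data, not real subscriber records.

```python
# Toy churn-classification sketch using a decision tree (scikit-learn).
# Features and labels are made-up sample values for illustration only.

from sklearn.tree import DecisionTreeClassifier

# features: [monthly_minutes, data_mb, complaints_last_quarter]
X = [[900, 1500, 0], [120, 200, 3], [700, 3000, 1], [80, 50, 4], [640, 900, 0]]
y = [0, 1, 0, 1, 0]   # 1 = churned, 0 = stayed

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(model.predict([[100, 150, 2]]))   # likely flagged as a churn risk
```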
With the intelligence that comes through analytics it is possible to determine customer buying patterns, identify causes for churn and develop strategies to promote loyalty. Call patterns based on demography or time of day will enable the CSPs to create innovative tariff schemes.
Determining the relations and buying patterns of users will provide opportunities for up-selling and cross-selling. The ability to identify marked deviations in customer behavior patterns helps the CSP decide ahead of time whether a trend is a warning bell or an opportunity waiting to be tapped.
Tinniam V Ganesh
