Into the Telecom vortex

“Ten little Indian boys went out to dine,
One choked his little self and then there were nine
Nine little Indian boys sat up very late;
One overslept himself and then there were eight…”

From the poem “Ten Little Indians”


You don’t need to be particularly observant to notice that the telecom landscape of the last decade and a half is littered with dead organizations, bloodshed and gore. Organizations have been slain by ruthless times, and the bigger ones have devoured the weaker, fallen ones. Telecom titans have vanished; giants have been reduced to dwarfs.

Some telecom companies merged in a deadly embrace, trying to beat the market forces, only to capitulate to their inexorable death march.

The period from the early 1980s to the late 1990s was a glorious one for telecommunications. Digital switches (1972-1982), ISDN (1988), international calling, trunk protocols, mobile telephony (~1991), 2G, 2.5G and 3G arrived in quick succession, one after another.

Advancement came after advancement. The future had never looked so bright for telecom companies.

The late 1990s were heady years, not just for telecom companies but for all technology companies. Stock prices soared and many stocks were over-valued, mainly due to what was described as the ‘irrational exuberance’ of the stock market.

Lucent, Alcatel, Ericsson, Nortel Networks, Nokia, Siemens and Telcordia all ruled supreme.

1997-2000: then the inevitable happened. The infamous dot-com bust of 2000 reduced many technology stocks to penny stocks, and telecom stocks went into a major tailspin. Stock prices of telecom organizations plummeted. This situation, many felt, was further exacerbated by the fact that nothing important or earth-shattering was forthcoming from telecom. In other words, there was no ‘killer app’ from the telecommunication domain.

From 2000 onwards 3G, HSDPA, LTE and the like have come and gone, but the markets were largely unimpressed. This was also the period of telecom’s downward slide. The last decade and a half has been extraordinarily violent: technology units of dying organizations have been cannibalized by the more successful ones.

Stellar organizations collapsed, others transformed into ‘white dwarfs’, and still others shattered with the ferocity of a supernova.

Here is a short recap of the major events.

  • 2006 – After a couple of unsuccessful attempts Alcatel and Lucent finally decide to merge
  • 2006 – Nokia marries Siemens in a 20 billion Euro deal
  • 2009-10 – Ericsson purchases Nortel’s CDMA and LTE business for $1.13 billion
  • 2009-10 – Nortel implodes
  • 2010 – Motorola sells networking unit to Nokia for $1.2 Billion
  • 2011 – Internet giant Google mops up Motorola’s handset division for $12.5 billion, largely for the patents
  • 2012 – Ericsson closes a deal with Telcordia for $1.15 billion
  • 2013 – Nokia sells its handset division to Microsoft after facing a serious beating from smartphones
  • 2015 – Nokia agrees to a $16.6 billion takeover of Alcatel Lucent

And so the story continues, like the rhyme in Agatha Christie’s mystery novel ‘And Then There Were None’:

“Ten little Indian boys went out to dine,
One choked his little self and then there were nine…”

Telecom companies continue their search for the elusive ‘killer app’ as progress comes in small increments – 3G, 3.5G, 3.75G, 4G, 5G and so on.

Personally, I think the future of telecom companies lies in their ability to embrace the latest technologies of Cloud Computing, Big Data, Software Defined Networks and Software Defined Datacenters, and to re-invent themselves. Rather than looking for some elusive ‘killer app’, they have to re-enter the technology scene with a big bang.

As I mentioned in one of my earlier posts, “Architecting a cloud Based IP Multimedia System”, the proverbial pot at the end of the rainbow may lie in:

  1. Virtualizing the IP Multimedia Subsystem (IMS) elements, namely the CSCFs (P-CSCF, S-CSCF, I-CSCF, etc.)
  2. Using features of the cloud like Software Defined Storage (SDS), load balancers and auto-scaling to elastically scale up or scale down the CSCF instances to handle varying call traffic
  3. Having equipment manufacturers (Nokia, Ericsson and Huawei) adopt innovative pricing models with carriers like AT&T, MCI, Airtel or Vodafone. Instead of a one-time cost for hardware and software, the equipment manufacturers would charge based on usage or call traffic (utility charging). This would be a win-win for both the equipment manufacturer and the carrier
  4. Using SDN to provide virtualized pipes between users, with the appropriate policies for advanced services like video chat, white-boarding, real-time gaming, etc.
  5. Using Big Data and Hadoop to analyze Call Detail Records (CDRs) and offer advanced services to customers, such as differential rates for calls (see the sketch below)
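
As an illustration of item 5, here is a minimal Hadoop Streaming sketch for CDR analysis. The CDR field layout, the script names and the rating step hinted at are my own assumptions; a real job would depend on the operator’s actual CDR format.

```python
#!/usr/bin/env python3
# cdr_mapper.py - Hadoop Streaming mapper
# Hypothetical CDR format: caller,callee,start_timestamp,duration_seconds
import sys

for line in sys.stdin:
    fields = line.strip().split(",")
    if len(fields) != 4:
        continue                                  # skip malformed records
    caller, _callee, _start, seconds = fields
    # Emit (subscriber, call minutes) so the reducer can total usage per subscriber
    print("%s\t%.2f" % (caller, int(seconds) / 60.0))
```

```python
#!/usr/bin/env python3
# cdr_reducer.py - Hadoop Streaming reducer: totals call minutes per subscriber.
# Hadoop delivers the mapper output sorted by key, which is what groupby needs.
import sys
from itertools import groupby

def parse(stream):
    for line in stream:
        key, value = line.rstrip("\n").split("\t")
        yield key, float(value)

for subscriber, records in groupby(parse(sys.stdin), key=lambda kv: kv[0]):
    total_minutes = sum(v for _, v in records)
    # A downstream rating step could apply differential tariffs based on this usage
    print("%s\t%.2f" % (subscriber, total_minutes))
```

The pair would be launched with the standard hadoop-streaming jar, with the CDR files as input and the per-subscriber totals feeding the rating logic.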

Clearly there will be challenges in this virtualized view of things. Telecom equipment is renowned for its five nines (99.999%) availability. The challenge will be achieving this resiliency, high availability and fault tolerance with cloud servers. How can WAN latencies be mitigated? How can SDN provide the QoS required for voice, video and data traffic in IMS?

IMS has many interesting services where video calls from laptops can be transferred as data calls to mobile phones and vice versa, or handed over from mobile networks to WiFi, and so on.

Many hurdles will have to be crossed, but this, in my opinion, is the path forward.

While the last decade and a half has been bad for the telecom industry, I personally feel we are on the verge of the next big breakthrough in telecom in the next year or two. Telecom will rise like a phoenix from its ashes in the next couple of years.


Introducing the Software Defined Computing Pattern

We are on the verge of a new ‘Software Defined’ revolution. The phrase ‘software defined’ refers to the ability to programmatically control computing elements, namely compute, storage and networking. We are entering a bold, brave ‘software defined’ era. Before we delve into the ‘whats’ of this revolution, I would like to outline the ‘whys’. What motivated this new thinking in computing?

Why ‘Software Defined’?

In the late 90s, IT infrastructure was unwieldy and unmanageable. Whenever new IT infrastructure had to be procured, there was the need to accurately size the required hardware, software, software licenses, routers, switches and storage elements. The problem in those days had to do with dimensioning: the CIO and IT managers had to calculate the requisite hardware and software elements. If the estimate was too conservative, the infrastructure would be under-dimensioned and unable to handle the load. On the other hand, if it was over-dimensioned, hardware and software would lie idle, wasting resources and money. So it used to be a fine balancing act. Even if the IT managers got lucky and got the size right, it was quite likely that conditions in the enterprise would change, forcing them to take a fresh look at their infrastructure.

This problem of dimensioning IT infrastructure was effectively solved by a technology called ‘virtualization’. In the 1960s IBM developed CP-67, a control program for its mainframes that contained the elements of virtualization. Much later, in the late 1990s, VMware created VMware Workstation, which could run multiple operating systems (OSes) on a single machine. In essence, virtualization abstracts the computer’s hardware, storage and network ports through a piece of software known as the hypervisor. On top of the hypervisor, the user can run any operating system, such as Windows, Linux or AIX. These OSes running on top of the hypervisor are known as guest OSes. Virtualization also enables different virtual servers to share one physical server. This process, called server consolidation, helps increase hardware utilization, load balancing and the overall optimization of IT resources.

The ability to virtualize computer hardware triggered some major advancements in computing. Prior to virtualization, each server would run a single OS with a single application, leaving the server idle for close to 60% of the time. Virtualization made it possible for enterprises to run several OSes, each with its own application, on a single computer, so computing resources were used far more effectively and efficiently.
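
To make the idea of programmatic control over a hypervisor concrete, here is a small sketch that lists the guest OSes sharing one physical server through the libvirt Python bindings. The connection URI and the presence of a local KVM/QEMU hypervisor are assumptions.

```python
# A minimal sketch: listing the guests consolidated on one physical host via libvirt.
# Assumes the libvirt Python bindings are installed and a local QEMU/KVM hypervisor is running.
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the local hypervisor
print("Host:", conn.getHostname())

for dom in conn.listAllDomains():            # every guest OS (virtual machine) on this host
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print("guest=%s vcpus=%d memory_kib=%d state=%d" % (dom.name(), vcpus, mem, state))

conn.close()
```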


Virtualization and the dot-com bust around the year 2000 effectively paved the way for a ‘Software Defined’ future. In other words, there was a need to control resources programmatically, aimed at more efficient utilization of those resources.

The move to the Cloud: Prior to the advent of the cloud, enterprises hosted their applications on their internal IT infrastructure using virtualization technology. With the pay-per-use, utility-style computing spearheaded by the likes of Amazon, many enterprises moved their applications to shared, multi-tenant (multiple customer), third-party hosting service providers, also known as cloud providers.

With the advent of Cloud Computing, the software defined era made major advances. Here is why: computing as such stands on three main pillars, namely compute, storage and networking.

As mentioned earlier in the post, one of the thorny issues in procuring and managing IT infrastructure is the problem of dimensioning or right-sizing. Virtualization solved this problem to some extent, but there was a need to give more control to the user. This is where the ‘Software Defined’ technologies emerged. The ‘Software Defined’ paradigm is based on prudence and sound engineering judgment: the whole premise of making anything ‘software defined’ is to ensure that the resources allocated for any task (computing, storage or networking) are optimal. Resources should be allocated exactly as needed, and released back into a shared, common pool when idle. Hence we have the advent of:

  • Software Defined Compute
  • Software Defined Storage
  • Software Defined Network

Software Defined Compute (SDC): In today’s clouds it is possible to precisely control the computing elements that make up your application. You can choose your CPU type, CPU speed, hypervisor, OS, RAM size, disks, etc. You can also provision your application to expand or contract elastically with demand, rather than under-provisioning or over-provisioning; this is done through a process called auto scaling. The desired configuration can be controlled through APIs provided by the cloud provider.
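
As one concrete example (using AWS’s APIs; the group name, launch template and subnets below are hypothetical), elastic capacity can be declared in a couple of calls. This is a sketch of the idea, not a complete deployment:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Declare an elastic group: never fewer than 2 instances, never more than 10
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",                         # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-server", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",     # hypothetical subnets
)

# Auto scaling policy: add or remove instances to hold average CPU near 60%
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="target-cpu-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```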

Software Defined Storage (SDS): There are multiple storage technologies spanning DAS, SATA drives, SAN and NAS storage, each addressing different needs of price, capacity and performance. Software Defined Storage allows the user to control the type of storage needed for the application through software APIs. Typically, the initial allocation to each application is rather conservative; additional storage is assigned from a common pool to the applications that need it most, and once the storage is no longer needed it is reclaimed.
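
For instance, with a cloud provider’s block-storage API (AWS shown here; the sizes and volume type are illustrative), an application can start with a conservative allocation, grow it on demand and release it when done:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start with a deliberately conservative allocation
vol = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,                 # GiB
    VolumeType="gp3",        # general-purpose SSD; chosen per price/performance needs
)

# Later, when the application needs more capacity, grow the same volume in place
ec2.modify_volume(VolumeId=vol["VolumeId"], Size=100)

# When the storage is no longer needed, reclaim it into the shared pool
# (in practice the volume must first be detached and become 'available')
ec2.delete_volume(VolumeId=vol["VolumeId"])
```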

Software Defined Network (SDN): SDN is the result of pioneering work at Stanford University and the University of California, Berkeley. It is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate. SDN decouples the routing and switching of data flows and moves control of the flows to a separate network element, namely the flow controller. The motivation is that the flow of data packets through the network can then be controlled programmatically, allowing multiple data streams to flow over the communication paths, with each stream individually defined for speed, latency, QoS, etc.
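
As an illustration of this programmatic control, here is a minimal sketch of an OpenFlow application written for the Ryu SDN controller framework. The subnet, output port and priority are hypothetical policy choices:

```python
# A minimal Ryu app: when a switch connects, install a flow rule that steers
# traffic for a (hypothetical) subnet out of port 2 at elevated priority.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PolicyController(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Match IPv4 traffic destined to 10.0.0.0/24 and send it out of port 2
        match = parser.OFPMatch(eth_type=0x0800,
                                ipv4_dst=("10.0.0.0", "255.255.255.0"))
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=inst))
```

The app would be started with ryu-manager and the switches pointed at the controller; real policies for voice or video traffic would additionally attach queue or meter actions for QoS.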

Software Defined Datacenter (SDDC): A datacenter has racks and racks of servers, storage boxes and networking equipment. A datacenter where one is able to provision, manage and operate this equipment through APIs or programs is a Software Defined Datacenter. Imagine being able to put together a car with the body of a BMW, the interior of a Merc, the engine of a Ferrari and the electronics of a Tesla! That is what an SDDC allows you to do!

Software Defined Computing Pattern (SDCP): Once SDC, SDS and SDN reach a level of maturity, I think the next logical step would be a move to Software Defined Computing Patterns. This is what I mean by that: theoretically, we can reduce the different types of enterprise applications to a set of computing patterns, for example e-commerce, social network, email server, web portal, etc. The Software Defined Computing Pattern would allow the user to choose a computing pattern based on the enterprise application. This would result in the setting up of the appropriate computing resources, storage resources, middleware and networking elements in a cloud. The user would then need to host their applications on this environment. Here is a good link to cloud patterns.
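
Purely as an illustration of the idea, such a pattern might be expressed declaratively and handed to an orchestrator that translates it into the corresponding SDC, SDS and SDN calls. Everything below, field names included, is hypothetical:

```python
# A hypothetical, declarative 'computing pattern' for an e-commerce application.
ECOMMERCE_PATTERN = {
    "compute":    {"web_tier": {"min": 2, "max": 10, "instance": "medium"},
                   "app_tier": {"min": 2, "max": 20, "instance": "large"}},
    "storage":    {"catalog_db": {"type": "block-ssd", "size_gb": 200},
                   "media":      {"type": "object",    "size_gb": 1000}},
    "network":    {"qos": {"checkout": "low-latency", "browse": "best-effort"}},
    "middleware": ["load-balancer", "message-queue", "relational-db"],
}

def provision(pattern: dict) -> None:
    """Walk the pattern and issue the corresponding SDC/SDS/SDN API calls (stubbed here)."""
    for tier, spec in pattern["compute"].items():
        print("provision compute tier %s: %s" % (tier, spec))
    # storage, network and middleware would be handled analogously

provision(ECOMMERCE_PATTERN)
```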

In this context I would like to bring to your notice another parallel trend, Software Defined Architecture (SDA), a term coined by Gartner in 2014. The SDA gateway is responsible for virtualizing the internal APIs, protocols and models used and exposing them as external APIs, user interfaces and resources.


The pace of progress in the last couple of years has been truly scorching. The ability to solve most large problems through a Software Defined Computing Pattern is sure to come.