Into the Telecom vortex

“Ten little Indian boys went out to dine,
One choked his little self and then there were nine
Nine little Indian boys sat up very late;
One overslept himself and then there were eight…”

From the poem “Ten Little Indians”

You don’t need to be particularly observant to notice that the telecom landscape of the last decade and a half is littered with dead organizations, bloodshed and gore. Organizations have been slain by ruthless times, and bigger ones have devoured the weaker, fallen ones. Telecom titans have vanished, and giants have been reduced to dwarfs.

Some telecom companies merged in a deadly embrace, trying to beat the market forces, only to capitulate to their inexorable death march.

The period from the early 1980s to the late 1990s was a glorious one for telecommunication. Digital switches (1972-1982), ISDN (1988), international calling, trunk protocols, mobile telephony (~1991), 2G, 2.5G and 3G arrived in succession, one after another.

Advancement came after advancement. The future had never looked so bright for telecom companies.

The late 1990s were heady years, not just for telecom companies but for all technology companies. Stock prices soared and many stocks were over-valued, mainly due to what was famously described as the ‘irrational exuberance’ of the stock market.

Lucent, Alcatel, Ericsson, Nortel Networks, Nokia, Siemens and Telcordia all ruled supreme.

1997-2000: then the inevitable happened. The infamous dot-com bust of 2000 reduced many technology stocks to penny stocks, and telecom stocks went into a major tailspin. This situation, many felt, was further exacerbated by the fact that nothing earth-shattering was forthcoming from telecom. In other words, there was no ‘killer app’ from the telecommunication domain.

From 2000 onwards 3G, HSDPA, LTE and the like have come and gone, but the markets were largely unimpressed. This was also the period of telecom’s downward slide. The last decade and a half has been extraordinarily violent: technology units of dying organizations have been cannibalized by the more successful ones.

Stellar organizations collapsed, others transformed into ‘white dwarfs’, and still others shattered with the ferocity of a supernova.

Here is a short recap of the major events.

  • 2006 – After a couple of unsuccessful attempts Alcatel and Lucent finally decide to merge
  • 2006 – Nokia marries Siemens in a 20 billion Euro deal
  • 2009-10 – Ericsson purchases Nortel’s CDMA and LTE business for $1.13 billion
  • 2009-10 – Nortel implodes
  • 2010 – Motorola sells networking unit to Nokia for $1.2 Billion
  • 2011 – Internet giant Google mops up Motorola’s handset division for $12.5 billion, largely for the patents
  • 2012 – Ericsson closes a deal with Telcordia for $1.15 billion
  • 2013 – Nokia sells its handset division to Microsoft after facing a serious beating from smartphones
  • 2015 – Nokia agrees to a $16.6 billion takeover of Alcatel Lucent

And so the story continues, like the rhyme in Agatha Christie’s mystery novel ‘And Then There Were None’:

“Ten little Indian boys went out to dine,
One choked his little self and then there were nine…”

Telecom companies continue their search for the elusive ‘killer app’ as progress comes in small increments – 3G, 3.5G, 3.75G, 4G, 5G and so on.

Personally, I think the future of telecom companies lies in their ability to embrace the latest technologies of Cloud Computing, Big Data, Software Defined Networks and Software Defined Datacenters, and re-invent themselves. Rather than looking for some elusive ‘killer app’, they have to re-enter the technology scene with a Big Bang.

As I mentioned in one of my earlier posts, “Architecting a cloud Based IP Multimedia System”, the proverbial pot of gold at the end of the rainbow may lie in:

  1. Virtualizing the IP Multimedia Subsystem (IMS) elements, namely the CSCFs, or Call Session Control Functions (P-CSCF, S-CSCF, I-CSCF etc.),
  2. Using cloud features like Software Defined Storage (SDS), load balancers and auto-scaling to elastically scale the CSCF instances up or down to handle varying ‘call traffic’
  3. Having equipment manufacturers (Nokia, Ericsson and Huawei) adopt innovative pricing models with carriers like AT&T, MCI, Airtel or Vodafone. Instead of a one-time cost for hardware and software, the equipment manufacturers would charge based on usage or call traffic (utility charging). This would be a win-win for both the equipment manufacturer and the carrier
  4. Using SDN to provide the necessary virtualized pipes between users with the necessary policies for advanced services like video-chat, white-boarding, real-time gaming etc.
  5. Using Big Data and Hadoop to analyze Call Detail Records (CDRs) and offer customers advanced services such as differential call rates; a small sketch of this idea follows the list
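
To make point 5 a little more concrete, here is a minimal sketch of the kind of CDR aggregation involved, in plain Python standing in for what would really be a Hadoop or Spark job over billions of records; the CDR field layout, the numbers and the ‘heavy user’ threshold are invented for illustration.

```python
# Minimal sketch of the CDR analysis in point 5: aggregate call minutes per
# subscriber and flag candidates for a differential (e.g. off-peak) rate plan.
# The CDR format (caller, callee, start timestamp, duration in seconds) and the
# threshold are assumptions for illustration only.
from collections import defaultdict

def analyze_cdrs(cdr_lines, heavy_user_minutes=600):
    usage = defaultdict(float)                       # subscriber -> total minutes
    for line in cdr_lines:                           # 'map' step
        caller, callee, start_ts, duration_secs = line.strip().split(",")
        usage[caller] += float(duration_secs) / 60.0
    # 'reduce' step: subscribers who would benefit from a differential rate
    return {sub: mins for sub, mins in usage.items() if mins >= heavy_user_minutes}

sample = [
    "9198xxxxx01,9198xxxxx02,1436500000,1200",
    "9198xxxxx01,9198xxxxx03,1436503600,42000",
    "9198xxxxx04,9198xxxxx01,1436507200,300",
]
print(analyze_cdrs(sample, heavy_user_minutes=10))
```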

Clearly there will be challenges in this virtualized view of things. Telecom equipment is renowned for its five-nines (99.999%) availability, and the challenge will be achieving this resiliency, high availability and fault tolerance with cloud servers. How can WAN latencies be mitigated? How can SDN provide the QoS required for voice, video and data traffic in IMS?

IMS enables many interesting services, for example video calls that can be transferred from laptops, as data calls, to mobile phones and vice versa, or handed over from mobile networks to WiFi, and so on.

Many hurdles will have to be crossed, but this, in my opinion, will be the path forward.

While the last decade and a half has been bad for the telecom industry, I personally feel we are on the verge of the next big breakthrough. Telecom will rise like the phoenix from its ashes in the next couple of years.

Also see
1. A crime map of India in R: Crimes against women
2. What’s up Watson? Using IBM Watson’s QAAPI with Bluemix, NodeExpress – Part 1
3. Bend it like Bluemix, MongoDB with autoscaling – Part 2
4. Informed choices through Machine Learning: Analyzing Kohli, Tendulkar and Dravid
5. Thinking Web Scale (TWS-3): Map-Reduce – Bring compute to data
6. Deblurring with OpenCV: Weiner filter reloaded

Where is the Cloud Computing bus going?

Technological innovation patterns have often repeated themselves in history, and so it is with Cloud Computing. Familiar patterns of change seem to be emerging today.

Here are some of the main trends that I see in Cloud Computing.

Advent of containers: Containers are the new hot topic in cloud computing. In hypervisor-based virtualization, each guest OS runs separately on top of the hypervisor, and each of these heavyweight OS’es carries a lot of overhead. Containers are a lighter-weight, OS-level form of virtualization that can run multiple isolated systems on a single host: containers share the host’s kernel, yet are far more efficient while providing much the same level of isolation. Here is an interesting article on containers: Containers, not virtual machines are the future of the cloud.

In many ways this containers-over-VMs innovation is reminiscent of the advantage of lightweight ‘threads’ over the heavier, slower ‘process’ approach in the OS world. It is inevitable that containers will eventually score over VMs.
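
As a small illustration of how lightweight this is in practice (not from the original post), here is a container being launched with the Docker SDK for Python; the image name is arbitrary, and a local Docker daemon plus the ‘docker’ package are assumed to be available.

```python
# Toy illustration: launching an isolated, lightweight container that shares
# the host kernel, using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon is running; the image name is arbitrary.
import docker

client = docker.from_env()
output = client.containers.run("alpine:3", "uname -r", remove=True)
# Prints the *host* kernel version: same kernel underneath, isolated userspace on top.
print(output.decode().strip())
```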

Open ‘something’ over proprietariness: Technology over the decades has always moved towards an ‘open’ approach over proprietary solutions. Hence, for example, we have OpenStack for creating instances and provisioning storage and networks, doing many of the things that are done separately by VMware, Citrix and Hyper-V. The intent is to have a common approach in place of several disparate ones. In the networking world there is OpenFlow, which tries to provide a uniform interface to the many different standards maintained by the Ciscos, Junipers and Brocades of the world. There are also other technologies like OpenCV (computer vision processing), OpenVPN (VPN protocol) etc. In all these cases the move is either to unify or to provide a layer above the disparate approaches. I am not sure whether OpenStack will prevail; only time will tell. I personally think we will move to a level of abstraction even above that of OpenStack.

Software Defined Everything: Cloud Computing started with the need to be able to provision computing resources through a user interface or Web portal. This was made possible thanks to virtualization: users could now define and request computing resources. Soon this led to the need to request storage programmatically as well. The trick in storage is ‘thin provisioning’, i.e. provisioning resources that barely satisfy the current needs of the application, with the application able to request more storage programmatically. Not to be outdone, networking followed suit: Software Defined Networking became a reality after Stanford and the University of California, Berkeley came up with the OpenFlow protocol. We have now entered the era of the Software Defined Datacenter. This is a dominant theme in Cloud Computing.

These are some of the predominant trends that are emerging in the Cloud Computing arena.

I have spent more than two decades of my career in telecom, implementing telecom protocols, starting in the mid-1980s, when digital switches began to emerge. This was followed by a spate of protocols and dizzying innovations: mobile telephony, ISDN, Intelligent Networks, Softswitch, UMTS, 3G, HSDPA, LTE etc.

I personally think that Cloud computing, to use a very frayed and hackneyed term, is at a similar ‘inflexion point’. Trends are emerging and we will soon be caught in the maelstrom of rapid change and innovation.

In this post I am going to do a Marty McFly from the ‘Back to the Future’ trilogy. I am going to set the clock of the DeLorean DMC-12 to 2020 and ‘Whoosh…..’

21 Apr 2020:

It is 21 Apr 2020 and a sunny day.  Here is a look at the Cloud Computing landscape

  • The Organization of Cloud Computing Standards (OCCS) now sets and governs the standards for all Cloud Providers of the world
  • Common APIs govern the provisioning of instances on the cloud regardless of the Cloud Provider. Instances are defined by RPE values, RAM, IOPS, LB and DNS requirements
  • Networking bandwidth, security and storage are also standards based
  • Enterprises use a ‘diffuse deployment’ strategy where the organization’s workloads are deployed to multiple cloud providers.
  • Workloads are Cloud Provider agnostic.
  • Enterprise applications themselves may span multiple cloud providers, e.g. e-commerce on Cloud Provider 1, analytics on HPC instances on Cloud Provider 2 and secure applications on the private cloud of Cloud Provider 3. Appropriate contracts are maintained between the Cloud Providers for charging for the usage.
  • Algorithms are used by enterprises to deploy workloads to cloud providers. The algorithms match the SLA and cost requirements of each application with what the cloud providers offer, minimizing cost while meeting the application’s SLA (a minimal sketch of this placement logic follows the list).
  • Compute, storage and networking costs fluctuate and enterprises use algorithms to optimize the deployment of workloads. Workloads are migrated to take advantage of these price changes
  • Consolidation and acquisitions happen at an alarming pace. Cloud, storage, network and HPC providers also compete fiercely
  • Cloud providers are swallowed by others and some lose out. The battle scene is bloody
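
A minimal sketch of the placement logic referred to in the list above might look as follows; the provider names, prices and availability figures are entirely invented.

```python
# Sketch of workload placement: pick, for each workload, the cheapest cloud
# provider that still meets its SLA (availability) target.
# Provider names, prices and SLA figures are invented for illustration.

providers = [
    {"name": "CloudProviderA", "cost_per_hour": 0.12, "availability": 0.9995},
    {"name": "CloudProviderB", "cost_per_hour": 0.09, "availability": 0.999},
    {"name": "CloudProviderC", "cost_per_hour": 0.15, "availability": 0.99999},
]

def place(workload):
    eligible = [p for p in providers if p["availability"] >= workload["min_availability"]]
    if not eligible:
        raise ValueError("no provider meets the SLA for " + workload["name"])
    return min(eligible, key=lambda p: p["cost_per_hour"])   # cheapest that qualifies

workloads = [
    {"name": "e-commerce", "min_availability": 0.9995},
    {"name": "analytics",  "min_availability": 0.999},
]
for w in workloads:
    print(w["name"], "->", place(w)["name"])
```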

Time to get back to the DeLorean. This time its clock is set to 2025.

18 Sep 2025

Today it is 18 Sep 2025, and it is sunny again, coincidentally.

  • Cloud Computing is dead, mate. These days technology has moved to ‘Cloud Computing in a box’.
  • The technology of these times is ‘Haze works’, where the computation happens in the stratosphere, over the ether …

So much for looking into the future. It is now time to get back to the reality of VMs

Introducing the Software Defined Computing Pattern

We are on the verge of a new ‘Software Defined’ revolution. The phrase ‘software defined’ refers to the ability to programmatically control computing elements, namely compute, storage and network. We are entering a bold, brave ‘software defined’ era. Before we delve into the ‘whats’ of this revolution, I would like to outline the ‘whys’. What motivated this new thinking in computing?

Why ‘Software Defined’?

In the late 90s, IT infrastructure was unwieldy and unmanageable. Whenever new IT infrastructure had to be procured, the required hardware, software, software licenses, routers, switches and storage elements had to be sized accurately. The problem in those days had to do with dimensioning: the CIO and IT managers had to calculate the requisite hardware and software elements. If the estimate was too conservative, the infrastructure would be under-dimensioned and unable to handle the load; if it was over-dimensioned, hardware and software would lie idle, wasting resources and money. So it used to be a fine balancing act. Even if the IT managers got lucky and got the size right, it was quite likely that conditions in the enterprise would change, forcing them to take a relook at their infrastructure.

This problem of dimensioning IT infrastructure was effectively solved by a technology called ‘virtualization’. In the mid-1960s IBM created CP-67 for its mainframes, which had the elements of virtualization. Much later, in 1998, VMware created VMware Workstation, which could run multiple operating systems (OS’es). In essence, virtualization abstracts the hardware of the computer, storage and network ports through a piece of software known as the hypervisor. Over the hypervisor, the user can run any operating system like Windows, Linux, AIX etc. These OS’es which run on top of the hypervisor are known as guest OS’es. Besides this, virtualization enables different virtual servers to share one physical server. This process, called server consolidation, helps to increase hardware utilization, load balancing and optimization of IT resources.

The ability to virtualize the computer hardware triggered some major advancements in computing. Prior to virtualization, each server would run a single OS with a single application, leaving the server idle for close to 60% of the time. Virtualization made it possible for enterprises to run several OS’es, each with its own application, on a single computer, so the computing resources were used more effectively and efficiently.

Virtualization and the dot-com bust around the year 2000 effectively paved the way for a ‘Software Defined’ future. In other words, there was a need to control resources programmatically, aimed at more efficient utilization of those resources.

The move to the Cloud: Prior to the advent of the cloud, enterprises hosted their applications on their internal IT infrastructure using virtualization technology. With pay-per-use, utility-style computing, spearheaded by the likes of Amazon, many enterprises moved their applications to shared, multi-tenant (multiple customer), third-party hosting service providers, also known as cloud providers.

With the advent of Cloud Computing the software defined era made major advances, and here is the reason why. Computing as such stands on three main pillars: compute, storage and networking.

As mentioned earlier in the post, one of the thorny issues in procuring and managing IT infrastructure is the problem of dimensioning, or right-sizing. Virtualization did solve this problem to some extent, but there was a need to give more control to the user. This is where the ‘Software Defined’ technologies emerged. The ‘Software Defined’ paradigm is based on prudence and sound engineering judgment: the whole premise of making anything ‘software defined’ is to ensure that the resources allocated for any task (computing, storage or networking) are optimal. The idea is that resources should be allocated exactly as needed, and released back into a shared, common pool when idle. Hence we have the advent of

  • Software Defined Compute
  • Software Defined Storage
  • Software Defined Network

Software Defined Compute (SDC): In the clouds these days it is possible to precisely control the computing elements that will make up your application. You can choose your CPU type, CPU speed, hypervisor, OS, RAM size, disks etc. You can also provision your application to expand or contract elastically with demand, rather than under-provisioning or over-provisioning; this is done through a process called auto-scaling. The desired configuration can be controlled through APIs provided by the Cloud Provider.
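
As one concrete example of such an API (my choice of example, not the only way to do it), here is how an instance of a chosen size could be requested through AWS’s boto3 SDK; the AMI id and instance type are placeholders, and configured credentials are assumed.

```python
# Software Defined Compute in practice: requesting an instance of a chosen size
# entirely through an API call (this would launch a real instance if run with
# valid AWS credentials). The image id, instance type and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image id
    InstanceType="t2.micro",           # CPU/RAM chosen in software, not in a data centre
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```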

Software Defined Storage (SDS): There are multiple storage technologies spanning DAS, SATA drives, SAN and NAS storage. These different storage technologies address different needs of price, storage capacity and performance. Software Defined Storage allows the user to control the type of storage needed by the application through software APIs. In storage, the initial allocation to each application is rather conservative; additional storage is assigned from a common pool to the applications that need it most, and once the storage is no longer needed it is reclaimed.
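
Here is a pure-Python sketch of that thin-provisioning idea: a conservative initial allocation, growth on demand from a shared pool, and reclamation when done. The class, sizes and names are illustrative only, not any particular vendor’s API.

```python
# Sketch of thin provisioning from a shared storage pool: each application gets
# a small initial allocation and grows on demand; freed capacity goes back to
# the common pool. Sizes are in GB and purely illustrative.
class StoragePool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb
        self.allocations = {}                      # app -> GB currently assigned

    def provision(self, app, initial_gb=10):       # conservative first allocation
        return self.expand(app, initial_gb)

    def expand(self, app, gb):                     # grow only when actually needed
        if gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.free_gb -= gb
        self.allocations[app] = self.allocations.get(app, 0) + gb
        return self.allocations[app]

    def reclaim(self, app):                        # return capacity to the pool
        self.free_gb += self.allocations.pop(app, 0)

pool = StoragePool(capacity_gb=1000)
pool.provision("billing-db")          # starts small
pool.expand("billing-db", 200)        # grows when the application needs more
pool.reclaim("billing-db")            # capacity returns to the shared pool
print(pool.free_gb)                   # 1000
```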

Software Defined Network (SDN): SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley; it is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate. Software Defined Networks decouple the routing and switching of data flows and move control of the flows to a separate network element, namely the flow controller. The motivation is that the flow of data packets through the network can then be controlled in a programmatic manner, allowing multiple data streams to flow over the communication paths with each stream individually defined for speed, latency, QoS etc.
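
A highly simplified, purely conceptual picture of what the flow controller and its flow table do (this is not the actual OpenFlow wire protocol) might look like this:

```python
# Conceptual sketch of an SDN flow table: the controller installs match -> action
# rules, and the switch forwards packets by rule lookup. A table miss is punted
# back to the controller. Field names and actions are invented for illustration.
flow_table = []   # list of (match_dict, action) installed by the controller

def install_flow(match, action):
    flow_table.append((match, action))

def forward(packet):
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "send_to_controller"        # table miss: ask the flow controller

# The controller programs two flows with different treatment (e.g. QoS classes)
install_flow({"dst_port": 5060},     "queue_voice")    # signalling traffic
install_flow({"dst_ip": "10.0.0.8"}, "output:port2")

print(forward({"dst_ip": "10.0.0.8", "dst_port": 80}))   # output:port2
print(forward({"dst_ip": "10.0.0.9", "dst_port": 443}))  # send_to_controller
```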

Software Defined Datacenter (SDDC): A datacenter has racks and racks of servers, storage boxes and networking equipment. A datacenter where one is able to provision, manage and operate this equipment through APIs or through programs is a Software Defined Datacenter. Imagine being able to put together a car with the body of a BMW, the interior of a Merc, the engine of a Ferrari and the electronics of a Tesla! That is what an SDDC allows you to do!

Software Defined Computing Pattern (SDCP): Once SDC, SDS and SDN reach a level of maturity, I think the next logical step would be a move to Software Defined Computing Patterns. This is what I mean by this: theoretically we can reduce the different types of enterprise applications to a set of computing patterns, e.g. e-commerce, social network, email server, Web portal etc. The Software Defined Computing Pattern would allow the user to choose a computing pattern based on the enterprise application, which would result in the setting up of the appropriate computing resources, storage resources, middleware and networking elements in the cloud. The user would then need to host their applications in this environment. Here is a good link to cloud patterns.
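
To make the idea concrete, an SDCP could be pictured as a catalogue that maps a named pattern to the resources it stands up; the patterns, resource lists and the deploy step below are invented for illustration only.

```python
# Sketch of the Software Defined Computing Pattern idea: a named pattern maps to
# the compute, storage, middleware and network elements it should stand up.
# Pattern contents are invented; a real SDCP would call SDC/SDS/SDN APIs.
PATTERNS = {
    "e-commerce": {
        "compute":    ["web x4", "app x4"],
        "storage":    ["block 500GB", "object 2TB"],
        "middleware": ["load-balancer", "mysql", "redis-cache"],
        "network":    ["public-subnet", "private-subnet"],
    },
    "web-portal": {
        "compute":    ["web x2"],
        "storage":    ["block 100GB"],
        "middleware": ["load-balancer", "cms"],
        "network":    ["public-subnet"],
    },
}

def deploy(pattern_name):
    blueprint = PATTERNS[pattern_name]
    for layer, elements in blueprint.items():
        # Stand-in for calls to the cloud provider's provisioning APIs
        print("provisioning %-10s -> %s" % (layer, ", ".join(elements)))

deploy("e-commerce")
```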

In this context I would like to bring to your notice that there is a parallel trend called Software Defined Architecture (SDA), a term coined by Gartner in 2014. The SDA gateway is responsible for translating between the internal APIs, protocols and models used within the system and the external APIs, user interfaces and resources exposed to consumers. Here is a diagram of SDA.

[Diagram: Software Defined Architecture (SDA)]

The pace of progress in the last couple of years has been scorching. The ability to solve most large problems through a Software Defined Computing Pattern is sure to come.

Towards an auction-based Internet

The post below was quoted and discussed extensively in GigaOM, 14 Jan 2011 – Software Defined Networks could create an auction-based bazaar (see the link).

Published in Telecom Asia, Jan 13, 2012 – Towards an auction-based internet

Are we headed to an auction-based Internet? This train of thought (no pun intended), which struck me while I was travelling from Chennai to Bangalore last evening, was the result of the synthesis of different ideas and technologies which I had read about in the recent past.

The current state of technology and the technology trends do seem to indicate such a possibility. An auction-based internet would be a business model in which bandwidth would be allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an eventuality is a distinct possibility considering the economics and latencies involved in data transfer, the evolution of the smart grid concept and the emergence of the promising technology known as the OpenFlow protocol. This is further elaborated below.

Firstly, in the book “Grids, Clouds and Virtualization”, Massimo Cafaro and Giovanni Aloisio highlight a typical problem of the computing infrastructure of today. The authors contend that a key issue in large scale computing is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. They quote Jim Gray (Turing Award, 1998), whose paper on “Distributed Computing Economics” states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. This principle is in fact used in the Hadoop paradigm, where locality is maintained by keeping the programs close to the data on which they operate.

The book highlights another interesting fact. It says the “cheapest and fastest way to move a Terabyte cross country is sneakernet” (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another). Google used sneakernet to transfer 120 TB of data. The SETI@home project also used sneakernet to transfer data recorded by their telescopes in Arecibo, Puerto Rico, stored on magnetic tapes, to Berkeley, California.

It is now a well-known fact that mobile and fixed line data has virtually exploded, clogging the internet. YouTube, video downloads and other streaming data choke the data pipes of the internet, and Service Providers have not found a good way to monetize this data explosion. While there has been a tremendous advancement in CPU processing power (of the order of petaflops) and enormous increases in storage capacity (of the order of petabytes), coupled with dropping prices, there has been no corresponding drop in bandwidth prices in relation to bandwidth capacity.

Secondly, in the book “Hot, Flat and Crowded”, Thomas L. Friedman describes the “Smart Homes” of the future, in which all home appliances will have sensors and will participate in the energy auction in real time as a part of the Smart Grid. The price of energy in the Energy Grid fluctuates like stock prices, since enterprises bid for energy during the day. In his Smart Home, Friedman envisions a situation in which the washing machine will turn on during off-peak hours, when the price of energy in the grid is low. In this way all the appliances in the homes of the future will minimize energy consumption by adjusting their cycles accordingly.

Why could the internet not behave in a similar fashion? The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. Why can’t we have an intelligent network in which the price of different data transfer rates varies depending on the time of day, the type of traffic and the quality of service required? Could the internet be based on an auction mechanism in which different devices bid for bandwidth based on the urgency, speed and quality of service required? Is this possible with the routers and switches of today?

The answer is yes. This can be achieved by a new, path-breaking innovation known as Software Defined Networks (SDNs), based on the OpenFlow protocol. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley, and represents a paradigm shift in the way networking elements operate. Do read my post Software Defined Networks: A glimpse of tomorrow for a more detailed look at SDNs. SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, the logic in the Flow Controller has to be updated dynamically based on the bid price.

For example, we could assume that a corporation has three different classes of traffic: ‘immediate’, ‘ASAP’ and ‘price below $x’. Based on the upper ceiling of the bid price, the OpenFlow controller will allocate a flow for the corporation’s immediate traffic. For the ASAP class, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b, and the OpenFlow controller will arrange such a flow when it can. The last type of traffic, which is not urgent, will be sent during off-peak hours. This requires that the OpenFlow controller be able to allocate different flows dynamically based on winning the auction process in this scheme. The current protocols of the internet, namely RSVP and DiffServ, allocate pipes based on traffic type and class, which is static once allocated; OpenFlow, by contrast, can dynamically adjust the traffic flows based on the current bid price prevailing in that part of the network.
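
A toy sketch of this bid-based allocation, with invented prices, capacities and flow classes, might look like this:

```python
# Sketch of the auction mechanism described above: flows bid for a limited
# amount of bandwidth, highest bids win, and 'ASAP' traffic is admitted only
# when the clearing price falls inside its requested range. Figures are invented.
def allocate(flows, capacity_mbps, spot_price):
    admitted, remaining = [], capacity_mbps
    for flow in sorted(flows, key=lambda f: f["bid"], reverse=True):
        if flow["mbps"] > remaining:
            continue                                  # no room left for this flow
        if flow["class"] == "immediate" or (
            flow["class"] == "asap" and flow["low"] <= spot_price <= flow["high"]
        ):
            admitted.append(flow)
            remaining -= flow["mbps"]
        # 'bulk' traffic is never admitted here: it waits for off-peak hours
    return admitted

flows = [
    {"name": "video-conf", "class": "immediate", "bid": 9.0, "mbps": 40},
    {"name": "backups",    "class": "bulk",      "bid": 1.0, "mbps": 80},
    {"name": "reports",    "class": "asap",      "bid": 4.0, "mbps": 30,
     "low": 2.0, "high": 5.0},
]
for f in allocate(flows, capacity_mbps=80, spot_price=3.5):
    print(f["name"])      # video-conf and reports win; backups wait for off-peak
```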

The ability of the OpenFlow protocol to dynamically allocate different flows could once and for all solve the problem of monetizing mobile and fixed line data. Users can decide the type of service they are interested in and choose appropriately. This will be a win-win for both the Service Providers and the consumer: the Service Provider will be able to get a return on investment for the infrastructure based on the traffic flowing through the network, while the consumer, rather than paying a fixed access charge, could pay a smaller charge for low bandwidth usage.

An auction-based internet is not just a possibility but would also be a worthwhile business model to pursue. The ability to route traffic dynamically through an auction mechanism enables the internet infrastructure to be utilized optimally. It will serve the dual purpose of easing traffic congestion, since the highest bidders get the pipe, while also monetizing data traffic based on its importance to the end user.

An auction-based internet is a very distinct possibility in our future, given the promise of the OpenFlow protocol.

All  thoughts, ideas or counter opinions are welcome!


Technologies to watch: 2012 and beyond

Published in Telecom Asia – Technologies to watch: 2012 and beyond

Published in Telecoms Europe – Hot technologies for 2012 and beyond

A keen observer of the technological firmament today will observe a grand spectacle of diverse technological events. Some technological trends will blaze a trail and become trend setters, while others will vanish without a trace. The factors that make certain technologies endure while others fade could be many, ranging from pure necessity to a coolness factor, from innovativeness to cost. This article looks at some of the technologies that are certain to be trailblazers in the years to come.

Software Defined Networks (SDNs): Software Defined Networks (SDNs) are based on the path-breaking paradigm of separating the control of a network flow from the actual flow of data. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley; it is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate. SDN decouples the routing and switching of data flows and moves control of the flows to a separate network element, namely the Flow Controller. The motivation is that the flow of data packets through the network can be controlled in a programmatic manner. The OpenFlow protocol has three components: the Flow Controller that controls the flows, the OpenFlow switch with its Flow Table, and a secure connection between the two. SDNs also include the ability to virtualize network resources; virtualized network resources are known as a “network slice”. A slice can span several network elements including the network backbone, routers and hosts. The ability to control multiple traffic flows programmatically provides enormous flexibility and power in the hands of users. SDNs are bound to be the network elements of the future.

Smart Grids: The energy industry is delicately poised for a complete transformation with the evolution of the smart grid concept. There is now an imminent need for increased efficiency in power generation, transmission and distribution, coupled with a reduction of energy losses. In this context many leading players in the energy industry are coming up with a connected, end-to-end digital grid to smartly manage energy transmission and distribution. The digital grid will have smart meters, sensors and other devices distributed throughout the grid, capable of sensing, collecting, analyzing and distributing data to devices that can act on it. The huge volume of collected data will be sent to intelligent devices, which will use wireless 3G networks to transmit it. Appropriate action, like alternate routing and optimal energy distribution, would then happen. Smart Grids are a certainty given that this technology addresses the dire need for efficient energy management; besides managing energy efficiently, they also save costs by preventing inefficiency and energy losses.

The NoSQL Paradigm: In large web applications where performance and scalability are key concerns, a non-relational (NoSQL) database is a better choice than the more traditional relational databases. There are several examples of such databases; the more reputed are Google’s BigTable, HBase, Amazon’s Dynamo, CouchDB and MongoDB. These databases partition the data horizontally and distribute it among many regular commodity servers. Access to the data is through get(key) or set(key, value) style APIs, and keys are located using a consistent hashing scheme, for example the Distributed Hash Table (DHT) method. The ability to distribute the data and the queries across several servers provides the key benefit of scalability; clearly, a single database handling an enormous number of transactions will degrade in performance as the transaction volume increases. Applications that have to frequently access and manage petabytes of data will clearly have to move to the NoSQL paradigm of databases.
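
As a toy sketch of the get(key)/set(key, value) over consistent hashing model (real stores such as Dynamo or Cassandra add virtual nodes and replication on top of this), consider:

```python
# Toy sketch of get(key)/set(key, value) over consistent hashing: each key is
# hashed onto a ring and owned by the first server clockwise from it, so adding
# or removing a server remaps only a fraction of the keys. Server names and the
# in-memory dicts standing in for real servers are illustrative only.
import bisect, hashlib

def _hash(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class TinyKVStore:
    def __init__(self, servers):
        self.ring = sorted((_hash(s), s) for s in servers)
        self.data = {s: {} for s in servers}          # stand-in for real servers

    def _owner(self, key):
        idx = bisect.bisect(self.ring, (_hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

    def set(self, key, value):
        self.data[self._owner(key)][key] = value

    def get(self, key):
        return self.data[self._owner(key)].get(key)

store = TinyKVStore(["server-1", "server-2", "server-3"])
store.set("user:42", {"name": "alice"})
print(store._owner("user:42"), store.get("user:42"))
```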

Near Field Communications (NFC): Near Field Communications (NFC) is a technology whose time has come. Mobile phones enabled with NFC can be used for a variety of purposes. One such purpose is integrating credit card functionality into the phone. The major players are already integrating NFC into newer versions of their mobile phones, including Apple’s iPhone, Google’s Android devices and Nokia’s handsets. We will never again have to carry wallets stuffed with credit cards; our mobile phone will double up as a Visa, MasterCard, etc. NFC also allows retail stores to send promotional coupons to subscribers in the vicinity of a shopping mall, lets posters or trailers of movies be sent as multimedia clips when travelling near a movie hall, and allows contact lists to be exchanged with friends in close proximity.

The Other Suspects: Besides the above we have other usual suspects

Long Term Evolution (LTE): LTE is the latest wireless technology and enables wireless access speeds of up to 56 Mbps. With the burgeoning interest in tablets and smartphones with their countless apps, LTE will be used heavily as we move along. For a vision of where telecom is headed, do read my post “The Future of Telecom”.

Cloud Computing: Cloud Computing is the other technology that is bound to gain momentum in the years ahead. Besides obviating the need for upfront capital expenditure the cloud enables quick and easy deployment of applications. Moreover the elasticity of the cloud will make it irresistible to large enterprises and corporations.

The above is a list of technologies to watch as they create new paths and blaze new trails. All these technologies are bound to transform the world as we know it and make our lives easier, better and more comfortable. These are the technologies that we need to focus on as we move bravely into our future. Do read my post for the year 2011, “Technology Trends – 2011 and beyond”.


Software Defined Networks (SDNs): A glimpse of tomorrow

Published in Telecom Asia, Jul 28, 2011 – A glimpse into the future of networking

Published in Telecoms Europe, Jul 28, 2011 – SDNs are new era for networking

Networks and networking, as we know them, are on the verge of a momentous change, thanks to a path-breaking technological concept known as Software Defined Networks (SDN). SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley; it is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate.

Networks and network elements of today have largely been closed and based on proprietary architectures. In today’s networks, the switching and routing of data packets happen in the same network element, e.g. the router.

Software Defined Networks (SDN) decouple the routing and switching of data flows and move control of the flows to a separate network element, namely the flow controller. The motivation is that the flow of data packets through the network can then be controlled in a programmatic manner. A flow controller can typically be implemented on a standard PC. In some ways this is reminiscent of Intelligent Networks and the Intelligent Network protocols, which delinked the service logic from switching and moved it to a network element known as the Service Control Point.

The OpenFlow protocol has three components: the Flow Controller that controls the flows, the OpenFlow switch with its Flow Table, and a secure connection between the Flow Controller and the OpenFlow switch. The OpenFlow protocol is an open API specification for modifying the flow table that exists in routers, Ethernet switches and hubs. The ability to securely control the flow of traffic programmatically opens up amazing possibilities.

OpenFlow Specification

Alternatively, vendors of branded routers can implement the OpenFlow protocol as an added feature in their existing routers and Ethernet switches. This will enable these routers and Ethernet switches to support both production traffic and research traffic using the same set of network resources.

The single greatest advantage of separating the control and data planes of network routers and Ethernet switches is the ability to modify and control different traffic flows through a set of network resources. In addition to this benefit, Software Defined Networks (SDNs) also include the ability to virtualize the network resources. Virtualized network resources are known as a “network slice”. A slice can span several network elements including the network backbone, routers and hosts.

Computing resources can be virtualized through the use of the hypervisor, which abstracts the hardware and enables several guest OS’es to run in complete isolation. Similarly, when a network element known as a FlowVisor (which has been experimentally demonstrated) is used along with the OpenFlow controller, it is possible to virtualize the network resources. Each traffic flow then gets a combination of bandwidth, routers and computing resources. Hence Software Defined Networks (SDNs) are also known as virtualized programmable networks, owing to the ability of different traffic flows to co-exist in perfect isolation from one another, with the traffic flowing through the resources controlled by programs in the Flow Controller.

The ability to manage different types of traffic flows across network resources opens up endless possibilities. SDNs have been successfully demonstrated in wireless handoffs between networks and in running multiple different flows through a common set of resources. SDNs in public and private clouds allow appropriate resources to be pooled during different times of the day based on the geographical location of the requests. Telcos could optimize the usage of their backbone network based on peak and lean traffic periods through the Core Network.

The OpenFlow protocol has already gained widespread support in the industry and has resulted in the formation of the Open Networking Foundation (ONF). The members of the ONF range from behemoths like Google, Facebook, Yahoo and Deutsche Telekom to networking giants like Cisco, Juniper, IBM and Brocade. Currently the ONF has around 43 member companies.

Software Defined Networks are a tectonic shift in the way networks operate and truly represent the dawn of a new networking era. A related post of interest is “Adding the OpenFlow variable in the IMS equation”.
