Where is the Cloud Computing bus going?

Technological innovation patterns have often repeated themselves in history. So it is with Cloud Computing. Familiar patterns of change seem to be emerging today.

Here are some of the main trends that I see in Cloud Computing:

Advent of containers: Containers are the new hot topic in cloud computing. In hypervisor-based virtualization, each guest OS runs separately, and running a separate, heavyweight guest OS over the hypervisor carries a lot of overhead. Containers use OS-level virtualization instead, running multiple isolated systems on a single host. Containers within a single operating system are much more efficient, being lightweight while providing much the same level of isolation, and they run the same kernel as the host. Here is an interesting article on containers: Containers, not virtual machines, are the future of the cloud.

In many ways this containers-over-VMs innovation pattern is reminiscent of the advantage of lightweight 'threads' over the heavy and slow 'process' approach in the OS world. It is inevitable that containers will eventually score over VMs.

Open 'something' over proprietary-ness: Technology over the decades has always moved towards 'open' approaches over proprietary solutions. Hence, for example, we have OpenStack for creating instances, provisioning storage and networking, and doing many things that are otherwise done separately by VMware, Citrix and Hyper-V. The intent is to have a common approach in place of several disparate ones. In the networking world there is OpenFlow, which tries to offer a uniform interface to the many different standards maintained by the Ciscos, Junipers and Brocades of the world. There are also other technologies like OpenCV (computer vision processing), OpenVPN (a VPN protocol) etc. In all these approaches the intent is either to unify the disparate approaches or to provide a layer over and above them. I am not sure whether OpenStack will prevail; only time will tell. I personally think we will move to a level of abstraction even above that of OpenStack.

Software Defined Everything: Cloud Computing started with the need to provision computing resources through a user interface or a Web portal. This was made possible thanks to virtualization; users could now define and request computing resources. Soon this led to the need to request storage programmatically as well. The trick in storage is 'thin provisioning', i.e. provisioning only as much as the application needs right now, with the application able to request more storage programmatically later. Not to be outdone, networking followed suit: Software Defined Networking became a reality when Stanford and the University of California, Berkeley came up with the OpenFlow protocol. We have now entered the era of the Software Defined Datacenter, and this is a dominant theme in Cloud Computing.
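To make the 'programmatic' part concrete, here is a minimal Python sketch of software defined provisioning, including thin provisioning of storage. The CloudAPI class and its methods are hypothetical stand-ins, not any particular provider's SDK.

```python
# Minimal sketch of programmatic ("software defined") provisioning.
# CloudAPI and its methods are hypothetical stand-ins for a provider SDK.

class CloudAPI:
    def create_instance(self, vcpus, ram_gb):
        print(f"Provisioned instance: {vcpus} vCPUs, {ram_gb} GB RAM")
        return {"id": "inst-1", "vcpus": vcpus, "ram_gb": ram_gb}

    def create_volume(self, logical_gb, allocated_gb):
        # Thin provisioning: expose a large logical size but allocate
        # only what the application needs right now.
        print(f"Volume: {logical_gb} GB logical, {allocated_gb} GB allocated")
        return {"id": "vol-1", "logical_gb": logical_gb, "allocated_gb": allocated_gb}

    def grow_volume(self, volume, extra_gb):
        # The application asks for more storage programmatically.
        volume["allocated_gb"] += extra_gb
        print(f"Volume {volume['id']} grown to {volume['allocated_gb']} GB allocated")
        return volume


if __name__ == "__main__":
    cloud = CloudAPI()
    vm = cloud.create_instance(vcpus=4, ram_gb=16)
    vol = cloud.create_volume(logical_gb=500, allocated_gb=50)   # thin-provisioned
    vol = cloud.grow_volume(vol, extra_gb=100)                   # grown on demand
```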

These are some of the predominant trends that are emerging in the Cloud Computing arena.

I have spent more than two decades of my career in telecom, implementing telecom protocols, starting in the mid-1980s. The mid-1980s was when digital switches started to emerge. This was followed by a spate of protocols and dizzying innovations like mobile telephony, ISDN, Intelligent Networks, Softswitch, UMTS, 3G, HSDPA, LTE etc.

I personally think that Cloud computing, to use a very frayed and hackneyed term, is at a similar ‘inflexion point’. Trends are emerging and we will soon be caught in the maelstrom of rapid change and innovation.

In this post I am going to do a Marty McFly from the 'Back to the Future' trilogy. I am going to set the clock of the DeLorean DMC-12 to 2020 and 'Whoosh…..'

21 Apr 2020:

It is 21 Apr 2020 and a sunny day. Here is a look at the Cloud Computing landscape:

  • The Organization of Cloud Computing Standards (OCCS) now sets and governs the standards for all Cloud Providers of the world
  • Common APIs govern provisioning of instances on the cloud regardless of the Cloud Provider. Instances are defined by RPE values, RAM, IOPS, LB and DNS requirements
  • Networking bandwidth, security and storage are also standards based
  • Enterprises use a ‘diffuse deployment’ strategy where the organization’s workloads are deployed to multiple cloud providers.
  • Workloads are Cloud Provider agnostic.
  • Enterprise applications themselves may span multiple cloud providers, e.g. the e-commerce front end on Cloud Provider 1, analytics on HPC instances on Cloud Provider 2 and secure applications on the private cloud of Cloud Provider 3. Appropriate contracts are maintained between the Cloud Providers for charging for the usage.
  • Algorithms are used by enterprises to deploy workloads to cloud providers. The algorithms match the SLA and cost requirements of the application with those offered by the cloud provider, minimizing cost while meeting the SLA requirements of the applications (a small sketch of such a matching algorithm follows this list).
  • Compute, storage and networking costs fluctuate, and enterprises use algorithms to optimize the deployment of workloads. Workloads are migrated to take advantage of these price changes.
  • Consolidation and acquisitions happen at an alarming pace. Cloud providers, storage, network and HPC providers also compete fiercely.
  • Cloud providers are swallowed by others and some lose out. The battle scene is bloody
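Here is a minimal sketch of the kind of matching algorithm mentioned above: each provider advertises a price and an availability SLA, and each workload goes to the cheapest provider that still meets its SLA. The provider names, prices and SLA figures are invented purely for illustration.

```python
# Sketch: match workloads to cloud providers by cost, subject to SLA.
# All providers, prices and SLA figures below are invented for illustration.

providers = [
    {"name": "CloudProvider1", "price_per_hour": 0.12, "availability": 0.999},
    {"name": "CloudProvider2", "price_per_hour": 0.09, "availability": 0.995},
    {"name": "CloudProvider3", "price_per_hour": 0.20, "availability": 0.9999},
]

workloads = [
    {"name": "e-commerce",  "min_availability": 0.999},
    {"name": "analytics",   "min_availability": 0.99},
    {"name": "secure-apps", "min_availability": 0.9999},
]

def place(workload, providers):
    # Keep only providers that meet the workload's SLA, then pick the cheapest.
    eligible = [p for p in providers if p["availability"] >= workload["min_availability"]]
    if not eligible:
        return None
    return min(eligible, key=lambda p: p["price_per_hour"])

for w in workloads:
    chosen = place(w, providers)
    print(f"{w['name']:12s} -> {chosen['name'] if chosen else 'no provider meets SLA'}")
```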

Time to get back to the DeLorean. This time the clock on the DeLorean is set to 2025.

18 Sep 2025

Today it is 18 Sep 2025, and it is sunny again, coincidentally.

  • Cloud Computing is dead, mate. These days technology has moved to ‘Cloud Computing in a box’.
  • The technology of these times is 'Haze works', where the computation happens in the stratosphere, over the ether …

So much for looking into the future. It is now time to get back to the reality of VMs.

Maximizing enterprise bandwidth by sharing global enterprise-wide network resources

Here is a slightly wild idea of mine

Introduction

Generally, in enterprises the network bandwidth available to employees is greatest at the start of the working day and progressively degrades as the day goes on, improving again later in the day. The reason is that bandwidth demand is highest during the middle of the day, usually between 10 am and 5 pm, when the number of employees at work is highest. Poor response times during the day are common to many enterprises because the network infrastructure is stretched beyond its designed capacity.

Description

This post looks at a novel technique in which the increasing bandwidth requirement is 'outsourced' to other geographical regions. In other words, the additional bandwidth is 'borrowed' from the network infrastructure of other regions.

This idea visualizes the use of a network device (the Time Zone – Traffic Director Switch) that borrows bandwidth based on the time of day and the bandwidth shortage in a particular region. This device effectively borrows the idle network resources (routers, hubs, switches) of other geographical regions, thus resolving the bandwidth issue.

In large global organizations, offices are spread throughout the world, each with its own daytime and night time. Night-time usage will be minimal unless the office works in shifts.

Large enterprises typically use leased lines between their various offices spread across the globe. Each of the regional offices will have its own set of routers, Ethernet switches, hubs etc.

[Figure]

The leased lines and the network infrastructure will experience peak usage during the daytime, which will subside during the region's night time. This idea looks at a scheme in which the leased line and the network infrastructure of region B are utilized during region B's night time to supplement the bandwidth crunch during region A's daytime.

In a typical situation we can view the enterprise's region as the center of the network resources (hubs, routers, switches) in its own region. Simplistically we can represent this as shown in Fig 1.

[Fig 1]

This post visualizes the realization of a network device, namely the Time Zone – Traffic Director Switch, which effectively borrows bandwidth based on the time of day and the bandwidth shortage in a particular region.

This device effectively borrows the idle network resources (routers, hubs, switches) of other geographical regions, thus resolving the bandwidth issue. Such a network device can be implemented using the OpenFlow protocol, which can virtualize the network resources.

In large cloud computing systems like those of Facebook, Google and Amazon, which have data centers spread throughout the world, traffic is directed to the data center closest to the user to minimize latency. In this scheme, the use of network resources in other geographical regions may introduce network latencies, but it will be economically efficient for the enterprise.

In fact this Time Zone – Traffic Director can be realized through the use of the OpenFlow protocol and Software Defined Networks (SDNs).

Detailed Description

This post utilizes a scheme in which the network resources of another region are used to supplement the network resources of a region during its daytime. The leased lines and the network resources in other regions, during their night time, will typically experience low utilization of both the leased line capacity and the network infrastructure capacity. So in this method bandwidth resources are borrowed from a region where the utilization of the network resources is low, supplementing the bandwidth in the region which is experiencing a bandwidth shortage. This scheme can be realized with a Software Defined Network using the OpenFlow protocol.

The OpenFlow protocol has a Flow Controller which can virtualize the network resources. This method visualizes virtualizing the network resources of a corporation that is spread geographically around the world to optimize the corporation's resources.

This is how the scheme would have to be realized (a code sketch of the decision logic follows the list):

  1. Over-dimension the leased line capacity to be at least 30%-40% more than the total capacity of the region's Ethernet switches, gigabit routers etc. So if the network resources at a region total 16 Gbps, then the leased line capacity should be of the order of 20 Gbps.
  2. When an external site is to be accessed over a high speed connection, the packets must be sent, after DNS resolution, to the Time Zone – Traffic Director Switch.
  3. The Time Zone – Traffic Director Switch will determine the bandwidth availability at that time. If the regional network resources are not experiencing peak usage, it will not use the network resources of another region.
  4. The Time Zone – Traffic Director Switch will then check the time of day. If it is the 'regional day' (e.g. 10 am – 5 pm, when usage is highest) it will hard-code the next hop to be a router in another geographical region. So if the traffic originates in Bangalore and there is a bandwidth crunch, the next hop will be a router in Europe or in the US.
  5. The Time Zone – Traffic Director Switch can be realized through the use of the OpenFlow protocol.
  6. By artificially redirecting the traffic to another region, all packets will be re-routed over the leased lines and network resources of the other region, thus giving the original region, Bangalore in this case, more bandwidth.
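Here is a rough Python sketch of the decision logic in steps 3 and 4. The utilization threshold, office hours and candidate regions are assumptions for illustration; in practice this logic would live in an SDN controller that programs the next hop via the OpenFlow protocol.

```python
# Sketch of the Time Zone - Traffic Director decision logic (steps 3 and 4 above).
# Thresholds, office hours and regions are illustrative assumptions.

from datetime import datetime

REGIONS = {
    "Bangalore": {"office_hours": (10, 17), "utc_offset": 5.5},
    "Europe":    {"office_hours": (9, 17),  "utc_offset": 1.0},
    "US-East":   {"office_hours": (9, 17),  "utc_offset": -5.0},
}

PEAK_UTILIZATION = 0.80  # above this, borrow bandwidth from another region

def local_hour(region, utc_now):
    return (utc_now.hour + utc_now.minute / 60 + REGIONS[region]["utc_offset"]) % 24

def is_office_hours(region, utc_now):
    start, end = REGIONS[region]["office_hours"]
    return start <= local_hour(region, utc_now) < end

def choose_next_hop(home_region, utilization, utc_now):
    """Return the region whose leased line and routers should carry the traffic."""
    # Step 3: if the home region is not at peak, keep traffic local.
    if utilization < PEAK_UTILIZATION or not is_office_hours(home_region, utc_now):
        return home_region
    # Step 4: otherwise pick a region that is currently outside its office hours.
    for region in REGIONS:
        if region != home_region and not is_office_hours(region, utc_now):
            return region
    return home_region  # nobody is idle; fall back to local resources

if __name__ == "__main__":
    now = datetime.utcnow()
    print("Next hop region:", choose_next_hop("Bangalore", utilization=0.9, utc_now=now))
```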

This can be represented in the diagram below.

[Figure]

In the figure below, the Time Zone – Traffic Director Switch uses the region's 'internal' network resources since it does not face a bandwidth crunch.

[Figure]

In the next figure the Time Zone – Traffic Director Switch borrows network resources from Region B.

[Figure]

This scheme will introduce additional latency compared with plain Shortest Path First (SPF) or Distance Vector Algorithm (DVA) routing. However, the enterprise gets to use its network infrastructure more efficiently and effectively, reducing the CAPEX and OPEX for the entire organization.

The entire scheme can be represented by the flow chart below.

[Flow chart]

Conclusion

The tradeoff in this scheme is between the economics of reusing the enterprise's network and leased line infrastructure and the additional latency introduced.

However, the delay will be more than compensated for by the increase in bandwidth available to the end user. This idea simultaneously solves the bandwidth crunch while utilizing the network resources across the entire enterprise efficiently and effectively. The scheme will also reduce the organization's CAPEX and OPEX as resources are used more efficiently.

Thoughts?

Find me on Google+

Towards an auction-based Internet

The post below was quoted and discussed extensively in GigaOM, 14 Jan 2011 – Software Defined Networks could create an auction-based bazaar (see the link).

Published in Telecom Asia, Jan 13, 2012 – Towards an auction-based internet

Are we headed to an auction-based Internet? This train of thought (no pun intended), which struck me while I was travelling from Chennai to Bangalore last evening, was the result of the synthesis of different ideas and technologies which I had read about in the recent past.

The current state of technology and the technology trends do seem to indicate such a possibility. An auction-based internet would be a business model in which bandwidth is allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an eventuality is a distinct possibility considering the economics and latencies involved in data transfer, the evolution of the smart grid concept and the emergence of a promising technology known as the OpenFlow protocol. This is further elaborated below.

Firstly, in the book "Grids, Clouds and Virtualization", Massimo Cafaro and Giovanni Aloisio highlight a typical problem of today's computing infrastructure. The authors contend that a key issue in large scale computing is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. They quote Jim Gray (Turing Award, 1998), whose paper on "Distributed Computing Economics" states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. This is in fact used in the Hadoop paradigm, where the principle of locality is maintained by keeping the programs close to the data on which they operate.

The book highlights another interesting fact. It says the "cheapest and fastest way to move a Terabyte cross country is sneakernet" (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another). Google used sneakernet to transfer 120 TB of data. The SETI@home project also used sneakernet to transfer data recorded by their telescopes in Arecibo, Puerto Rico, stored on magnetic tapes, to Berkeley, California.

It is now a well known fact that mobile and fixed line data has virtually exploded, clogging the internet. YouTube, video downloads and other streaming data choke the data pipes of the internet, and Service Providers have not found a good way to monetize this data explosion. While there has been tremendous advancement in CPU processing power (of the order of petaflops) and enormous increases in storage capacity (of the order of petabytes), coupled with dropping prices, there has been no corresponding drop in bandwidth prices relative to bandwidth capacity.

Secondly, in the book "Hot, Flat and Crowded", Thomas L. Friedman describes the "Smart Homes" of the future, in which all home appliances will have sensors and will participate in the energy auction in real time as part of the Smart Grid. The price of energy in the Energy Grid fluctuates like stock prices, since enterprises bid for energy during the day. In his Smart Home, Friedman envisions a situation in which the washing machine turns on during off-peak hours, when the price of energy in the energy grid is low. In this way all the appliances in the homes of the future will minimize energy consumption by adjusting their cycles accordingly.

Why could the internet not behave in a similar fashion? The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. Why can we not have an intelligent network in which the price of different data transfer rates varies depending on the time of day, the type of traffic and the quality of service required? Could the internet be based on an auction mechanism in which different devices bid for bandwidth based on the urgency, speed and quality of service required? Is this possible with the routers and switches of today?

The answer is yes. This can be achieved through a path-breaking innovation known as Software Defined Networks (SDNs), based on the OpenFlow protocol. SDN is the result of a pioneering effort by Stanford University and the University of California, Berkeley, and represents a paradigm shift in the way networking elements operate. Do read my post Software Defined Networks: A glimpse of tomorrow for a more detailed look at SDNs. SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, it is necessary for the logic in the Flow Controller to be updated dynamically based on the bid price.

For example, we could assume that a corporation has three different classes of traffic: immediate, ASAP, and 'send when the price is below $x'. Based on the upper ceiling of the bid price, the OpenFlow controller will allocate a flow for the corporation's immediate traffic. For the ASAP class, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b, and the OpenFlow Controller will arrange such a flow. The last type of traffic, which is not urgent, will be sent during non-peak hours. This requires that the OpenFlow controller be able to allocate different flows dynamically based on winning the auction process in this scheme. The current protocols of the internet, namely RSVP and DiffServ, allocate pipes based on the traffic type and class, and the allocation is static once made. OpenFlow, in contrast, can dynamically adjust the traffic flows based on the current bid price prevailing in that part of the network.
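To make the idea concrete, here is a toy Python sketch of the bidding logic: three corporate traffic classes with different bid ceilings compete for a pipe whose spot price fluctuates through the day. The prices and classes are invented for illustration; in a real system the winning flows would be pushed into the network by the OpenFlow controller.

```python
# Toy sketch of auction-based bandwidth allocation for three traffic classes.
# Spot prices and bid ceilings are invented for illustration only.

flows = [
    {"name": "immediate", "max_bid": 10.0},   # send now, whatever the price
    {"name": "asap",      "max_bid": 4.0},    # send when the price falls into range
    {"name": "bulk",      "max_bid": 1.0},    # send only at off-peak prices
]

def allocate(flows, spot_price):
    """Admit every flow whose bid ceiling meets the current spot price."""
    admitted = [f["name"] for f in flows if f["max_bid"] >= spot_price]
    deferred = [f["name"] for f in flows if f["max_bid"] < spot_price]
    return admitted, deferred

# Simulated spot prices over the day (peak at midday, cheap at night).
for hour, spot_price in [(9, 2.0), (13, 6.0), (23, 0.5)]:
    admitted, deferred = allocate(flows, spot_price)
    print(f"{hour:02d}:00  price={spot_price:4.1f}  admitted={admitted}  deferred={deferred}")
```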

The ability of the OpenFlow protocol to dynamically allocate different flows could once and for all solve the problem of monetizing mobile and fixed line data. Users can decide the type of service they are interested in and choose appropriately. This will be a win-win for both the Service Provider and the consumer. The Service Provider will be able to get a ROI on the infrastructure based on the traffic flowing through the network, while the consumer, rather than paying a fixed access charge, could pay a smaller charge for low bandwidth usage.

An auction-based internet is not just a possibility but would also be a worthwhile business model to pursue. The ability to route traffic dynamically based on an auction mechanism enables the internet infrastructure to be utilized optimally. It serves the dual purpose of easing traffic congestion, as the highest bidders get the pipe, while also monetizing data traffic based on its importance to the end user.

An auction based internet is a very distinct possibility in our future given the promise of the OpenFlow protocol.

All  thoughts, ideas or counter opinions are welcome!

Find me on Google+

Adding the OpenFlow variable in the IMS equation

This post has been cited in the Springer book "The Future Internet", in the chapter "Integrating OpenFlow in IMS Networks and Enabling for Future Internet Research and Experimentation". See the 9th reference under References.

IMS a non-starter: IP Multimedia Systems (IMS) has been the grand vision of this decade. Unfortunately it has remained just that, a vision, with only sporadic deployments. IMS has been a non-starter in many respects: operators and network providers somehow don't find any compelling reason to re-architect the network with IMS network elements, and there have been no killer applications either. But IP Multimedia Systems definitely hold enormous potential, and a couple of breakthroughs in key technologies could mark the 'tipping point' of this great technology, which promises access-agnostic services including applications like video conferencing, multi-player gaming and whiteboarding, all over an all-IP backbone. In this context please do read my post "The Case for a Cloud based IMS solution".

SDNs, revolutionary: In this scenario a radically new, emerging concept is Software Defined Networks (SDNs). SDN is the result of a pioneering effort by Stanford University and the University of California, Berkeley, is based on the OpenFlow protocol, and represents a paradigm shift in the way networking elements operate. SDNs consist of an OpenFlow Controller which is able to control network resources in a programmatic manner. These network resources, which can span routers, hubs and switches, are known as a slice and can be controlled through the OpenFlow Controller. The key aspect of the OpenFlow protocol is the ability to manage slices of virtualized resources end-to-end. It is this particular aspect of Software Defined Networks (SDNs) and the OpenFlow protocol which can be leveraged in IMS networks. Do read my post Software Defined Networks: A glimpse of tomorrow for more details.

This article looks at a way in which the OpenFlow protocol can be included in the IMS fabric to provide QoS in the IMS network. However, please note that this post is exploratory in nature and does not purport to be a well researched article. Nevertheless the idea is well worth mulling over.

QoS in IMS: The current method of ensuring differentiated QoS in IMS networks is through two key network elements, namely the Policy Decision Point (PDP) and the Policy Enforcement Point (PEP). The PDP retrieves the necessary policy rules (flow parameters), in response to an RSVP message, and sends them to the PEP, which then executes these instructions. In IMS, the Policy Control Function (PCF) in the P-CSCF plays the role of the PDP, while the PEP resides in the GGSN. In an IMS call flow, the SDP message is encapsulated within SIP and carries the QoS parameters. The PCF examines the parameters, retrieves the appropriate policies and informs the PEP in the GGSN for that traffic flow.

OpenFlow in P-CSCF: This post looks at a technique in which the OpenFlow Controller can be a part of the P-CSCF. The QoS parameters which come in the SDP message can be examined in a similar way. Instead of retrieving fixed policy rules, the OpenFlow Controller in the P-CSCF can programmatically identify the bandwidth, the routers and the network slice through which the flow should pass. It would then inform the equivalent of an OpenFlow switch in the GGSN, which would control the necessary network resources end-to-end. The advantage of the OpenFlow Controller/OpenFlow switch combination over the PDP/PEP combination is the ability to adapt the network flow to bandwidth changes and traffic. The OpenFlow Controller will be able to dynamically re-route the traffic to alternate network resources, or to a different 'network slice', in case of congestion.
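Here is a hedged Python sketch of this idea: the P-CSCF extracts the bandwidth attribute (the b=AS: line) and the media port from the SDP offer and asks an OpenFlow-style controller to program a flow for that media stream. The FlowController class and its install_flow method are hypothetical placeholders, not a real controller API.

```python
# Sketch: the P-CSCF pulls QoS hints from the SDP offer and asks a hypothetical
# OpenFlow-style controller to program a flow for the media stream.

import re

SDP_OFFER = """v=0
o=alice 2890844526 2890844526 IN IP4 192.0.2.10
c=IN IP4 192.0.2.10
b=AS:384
m=video 49170 RTP/AVP 96
"""

def parse_sdp(sdp):
    """Extract the media IP, port and the AS (kbps) bandwidth from an SDP body."""
    ip = re.search(r"^c=IN IP4 (\S+)", sdp, re.M).group(1)
    port = int(re.search(r"^m=\w+ (\d+)", sdp, re.M).group(1))
    bw_kbps = int(re.search(r"^b=AS:(\d+)", sdp, re.M).group(1))
    return ip, port, bw_kbps

class FlowController:
    """Hypothetical stand-in for an OpenFlow controller sitting in the P-CSCF."""
    def install_flow(self, dst_ip, dst_port, rate_kbps):
        # In a real deployment this would send flow rules towards the
        # GGSN-side switch (the equivalent of the PEP) for this media flow.
        print(f"Flow installed: {dst_ip}:{dst_port} at {rate_kbps} kbps")

if __name__ == "__main__":
    ip, port, bw = parse_sdp(SDP_OFFER)
    FlowController().install_flow(ip, port, bw)
```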

Conclusion: In my opinion, adding the OpenFlow protocol to the IP Multimedia switching fabric can provide much more control and better QoS in the network. It may also enable many more interesting applications in addition to the already existing set of powerful applications.

Find me on Google+

Software Defined Networks (SDNs): A glimpse of tomorrow

Published in Telecom Asia, Jul 28, 2011 – A glimpse into the future of networking

Published in Telecoms Europe, Jul 28 2011 – SDNs are new era for networking

Networks and networking, as we know them, are on the verge of a momentous change, thanks to a path-breaking technological concept known as Software Defined Networks (SDN). SDN is the result of a pioneering effort by Stanford University and the University of California, Berkeley, is based on the OpenFlow protocol, and represents a paradigm shift in the way networking elements operate.

Networks and network elements, as they exist today, have largely been closed and based on proprietary architectures. In today's networks, the switching and routing of data packets happen in the same network element, e.g. the router.

Software Defined Networks (SDN) decouple the routing and switching of the data flows and move the control of the flow to a separate network element, namely the flow controller. The motivation is that the flow of data packets through the network can then be controlled in a programmatic manner. A Flow Controller can typically be implemented on a standard PC. In some ways this is reminiscent of Intelligent Networks and the Intelligent Network protocol, which delinked the service logic from the switching and moved it to a network element known as the Service Control Point.

The OpenFlow protocol has three components: the Flow Controller that controls the flows, the OpenFlow switch with its Flow Table, and a secure channel between the Flow Controller and the OpenFlow switch. The OpenFlow protocol is an open API specification for modifying the flow table that exists in routers, Ethernet switches and hubs. The ability to securely control the flow of traffic programmatically opens up amazing possibilities.
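To make the controller / switch / flow table split concrete, here is a minimal Python sketch: the controller installs match-action rules, and the switch forwards packets by looking them up, punting table misses back to the controller. The classes and fields are illustrative and are not the actual OpenFlow wire format.

```python
# Minimal sketch of the controller / switch / flow table split in OpenFlow.
# Classes and fields are illustrative; this is not the OpenFlow wire format.

class FlowTable:
    def __init__(self):
        self.rules = []   # list of (match_dict, action) entries

    def add_rule(self, match, action):
        self.rules.append((match, action))

    def lookup(self, packet):
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "send_to_controller"   # table miss: punt to the controller

class OpenFlowSwitch:
    def __init__(self):
        self.table = FlowTable()

    def forward(self, packet):
        return self.table.lookup(packet)

class FlowControllerApp:
    """The control plane: decides the rules and pushes them to the switch."""
    def program(self, switch):
        switch.table.add_rule({"dst_ip": "10.0.0.5"}, "output:port2")
        switch.table.add_rule({"tcp_port": 80}, "output:port3")

if __name__ == "__main__":
    sw = OpenFlowSwitch()
    FlowControllerApp().program(sw)
    print(sw.forward({"dst_ip": "10.0.0.5"}))                 # -> output:port2
    print(sw.forward({"dst_ip": "8.8.8.8", "tcp_port": 22}))  # -> send_to_controller
```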

OpenFlow Specification

Alternatively, vendors can implement the OpenFlow protocol as an added feature in their existing routers and Ethernet switches. This will enable these routers and Ethernet switches to carry both production traffic and research-based traffic over the same set of network resources.

The single greatest advantage of separating the control and data planes of network routers and Ethernet switches is the ability to modify and control different traffic flows through a given set of network resources. In addition, Software Defined Networks (SDNs) also include the ability to virtualize the network resources. Virtualized network resources are known as a "network slice", and a slice can span several network elements including the network backbone, routers and hosts.

Computing resources can be virtualized through the use of a hypervisor, which abstracts the hardware and enables several guest OSes to run in complete isolation. Similarly, when a network element known as the FlowVisor (which has been experimentally demonstrated) is used along with the OpenFlow Controller, it is possible to virtualize the network resources. Each traffic flow then gets its own combination of bandwidth, routers and computing resources. Hence Software Defined Networks (SDNs) are also known as Virtualized Programmable Networks, owing to the ability of different traffic flows to co-exist in perfect isolation of one another while the flows through the resources are controlled by programs in the Flow Controller.
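As a rough illustration of slicing, here is a small Python sketch of a FlowVisor-like proxy that assigns each slice a flowspace (here, a VLAN range) and hands traffic in that flowspace to that slice's controller. The slice names and the VLAN-based policy are assumptions for illustration.

```python
# Sketch of network slicing: a FlowVisor-like proxy assigns each slice a
# flowspace (here a VLAN range) and delivers traffic only to that slice's
# controller. Slice names and the VLAN policy are illustrative assumptions.

slices = {
    "production": {"vlans": range(1, 100),   "controller": "ctrl-prod"},
    "research":   {"vlans": range(100, 200), "controller": "ctrl-research"},
}

def slice_for(vlan_id):
    for name, s in slices.items():
        if vlan_id in s["vlans"]:
            return name, s["controller"]
    return None, None

for vlan in (10, 150, 250):
    name, ctrl = slice_for(vlan)
    owner = f"slice '{name}' (controller {ctrl})" if name else "no slice; drop"
    print(f"VLAN {vlan}: {owner}")
```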

The ability to manage different types of traffic flows across network resources opens up endless possibilities. SDNs have been successfully demonstrated in wireless handoffs between networks and in running multiple different flows through a common set of resources. SDNs in public and private clouds allow appropriate resources to be pooled during different times of the day based on the geographical location of the requests. Telcos could optimize the usage of their backbone network based on peak and lean traffic periods through the Core Network.

The OpenFlow protocol has already gained widespread support in the industry and has resulted in the formation of the Open Networking Foundation (ONF). The members of the ONF range from behemoths like Google, Facebook, Yahoo and Deutsche Telekom to networking giants like Cisco, Juniper, IBM and Brocade. Currently the ONF has around 43 member companies.

Software Defined Networks are a tectonic shift in the way networks operate and truly represent the dawn of a new networking era. A related post of interest is "Adding the OpenFlow variable in the IMS equation".

Find me on Google+