A method for optimal bandwidth usage by auctioning available bandwidth using the OpenFlow protocol

Here is a recent idea of mine that has made it to IP.com (Intellectual Property.com). See link

A method for optimal bandwidth usage by auctioning available bandwidth using the OpenFlow protocol.  Here is the full article from IP.com

In this article I provide some more details on my earlier post – Towards an auction-based internet.

Abstract:
As the data that traverses the internet continues to grow exponentially, a severe bandwidth crunch becomes a distinct possibility in the not too distant future. This invention describes a novel technique for auctioning the available bandwidth to users based on bid price, the quality of service expected and the type of traffic. The method suggested in this invention is to use the OpenFlow protocol to dynamically allocate bandwidth to users for different flows over a virtualized network infrastructure.

Introduction:
Powerful smartphones, bandwidth-hungry applications, content-rich applications and increasing user awareness have together resulted in a virtual explosion of mobile broadband and data usage. There are two key drivers behind this phenomenal growth in mobile data. One is the explosion of devices, viz. smartphones, tablet PCs, e-readers and laptops with wireless access. The second is video: over 30% of overall mobile data traffic is video streaming, which is extremely bandwidth hungry. Besides these, new technologies like the “Internet of Things” and “Smart Grids” now have millions of sensors and actuators connected to the internet and contending for scarce bandwidth. In other words, there is an enormous data overload happening in the networks of today.
Two key issues of today’s computing infrastructure are data latency and the economics of data transfer. Jim Gray (Turing Award, 1998), in his paper on “Distributed Computing Economics”, tells us that the economics of today’s computing depends on four factors, namely computation, networking, database storage and database access. He then breaks down what one dollar buys as follows.
One dollar (1 $) equates to approximately:
≈ 1 GB sent over the WAN
≈ 10 Tops (tera CPU operations)
≈ 8 hours of CPU time
≈ 1 GB of disk space
≈ 10 M database accesses
≈ 10 TB of disk bandwidth
≈ 10 TB of LAN bandwidth
As can be seen from the above breakdown, WAN bandwidth contributes disproportionately to the cost in comparison to the others. In other words, while the processing power of CPUs and storage capacities have multiplied, accompanied by dropping prices, the cost of bandwidth has remained high. Moreover, the available bandwidth is insufficient to handle the explosion of data traffic.
It is claimed that the “cheapest and fastest way to move a Terabyte cross country is sneakernet” (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another).
While there has been a tremendous advancement in CPU processing power (of the order of petaflops) and enormous increases in storage capacity (of the order of petabytes), coupled with dropping prices, there has been no corresponding drop in bandwidth prices in relation to the bandwidth capacity.
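A back-of-the-envelope calculation makes the disparity concrete. The sketch below (Python, using the approximate equivalences quoted above; the 1 TB example and the constant names are mine, not Gray's) compares the cost of moving the same amount of data over the WAN, the LAN and disk:

```python
# Rough cost comparison using the ~$1 equivalences quoted above.
# All figures are approximate and used only to illustrate the argument.

COST_PER_GB_WAN = 1.0          # $1 buys ~1 GB sent over the WAN
GB_PER_DOLLAR_LAN = 10_000.0   # $1 buys ~10 TB of LAN bandwidth
GB_PER_DOLLAR_DISK = 10_000.0  # $1 buys ~10 TB of disk bandwidth

data_gb = 1_000  # moving 1 TB of data

wan_cost = data_gb * COST_PER_GB_WAN      # ~$1000 over the WAN
lan_cost = data_gb / GB_PER_DOLLAR_LAN    # ~$0.10 over the LAN
disk_cost = data_gb / GB_PER_DOLLAR_DISK  # ~$0.10 of disk bandwidth

print(f"WAN: ${wan_cost:.2f}, LAN: ${lan_cost:.2f}, disk: ${disk_cost:.2f}")
# WAN transfer is roughly 10,000x more expensive per byte than LAN or disk
# bandwidth, which is why sneakernet can beat the network for bulk transfers.
```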
It is in this context that an auction-based internet makes eminent sense. An auction-based internet would be a business model in which bandwidth is allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an approach becomes imperative considering the economics and latencies involved in data transfer and the emergence of the promising technology known as the OpenFlow protocol. This is further elaborated below.

Description

As mentioned in Jim Gray’s paper, a key issue that we are going to face in the future has to do with the economics of data transfer and the associated WAN latencies.
As can be seen, there are 3 distinct issues with the current state of technology:
1) There is an exponential increase in data traffic traversing the internet. According to a Cisco report, the projected increase in data traffic between 2014 and 2015 is of the order of 200 exabytes (10^18 bytes). The internet is thus clogged by the many bandwidth-hungry applications and the millions of devices that make up the internet.
2) WAN latencies and the economics of data transfer are two key issues of the net.
3) Service Providers have not found a good way to monetize this data explosion.
Clearly bandwidth is a resource that needs to be utilized judiciously given that there are several contenders for the usage of bandwidth.
Detailed description: This invention suggests a scheme by which internet bandwidth can be auctioned among users based on their bid price, the Quality of Service (QoS) required and the type of traffic (video, voice, data, streaming). Energy utilities already auction electricity to the highest bidder. This invention suggests a similar approach to auction scarce bandwidth to competing bidders.
The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. This invention suggests the need for an intelligent network that prices data transfer rates differently depending on the time of day, the type of traffic and the quality of service required. In this scheme of things the internet will be based on an auction mechanism in which different devices bid for scarce bandwidth based on the urgency, speed and quality of service required.
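As a purely illustrative sketch of such time- and traffic-dependent pricing, the snippet below computes a floor price per Mbps from which an auction could start; the peak window, multipliers and base price are arbitrary assumptions, not part of the invention:

```python
# Illustrative reserve (floor) price per Mbps, varying with time of day and
# traffic type. All multipliers and prices are arbitrary assumptions.

PEAK_HOURS = range(18, 23)            # assume 6 pm - 11 pm is the peak window
TRAFFIC_MULTIPLIER = {"video": 1.5, "voice": 1.2, "data": 1.0, "streaming": 1.4}
BASE_PRICE_PER_MBPS = 0.01            # assumed base price in $ per Mbps

def reserve_price(hour: int, traffic_type: str) -> float:
    """Floor price per Mbps below which bids are rejected."""
    time_factor = 2.0 if hour in PEAK_HOURS else 1.0
    return BASE_PRICE_PER_MBPS * time_factor * TRAFFIC_MULTIPLIER[traffic_type]

print(round(reserve_price(20, "video"), 4))  # peak-hour video  -> 0.03
print(round(reserve_price(3, "data"), 4))    # off-peak data    -> 0.01
```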
Such a network can be realized today provided the network and the network elements that constitute the internet implement the OpenFlow protocol.
Software Defined Networking (SDN) is a new, path-breaking innovation in which network traffic can be controlled programmatically through the use of the OpenFlow protocol. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley, is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate.

SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, it is necessary for the logic in the Flow Controller to be updated dynamically based on the bid price, the QoS parameters and the traffic type.

The OpenFlow architecture has a Flow Controller element which can create different flows by manipulating the flow tables of the different network elements. Hence the Flow Controller, depending on the bid price, the bandwidth rate and the QoS, will auction the available bandwidth among the different bids and create different flows for different users. The Flow Controller will then update the flow tables of the network elements that participate in realizing this end-to-end flow of traffic for the different users.
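A minimal sketch of what this auction step in the Flow Controller might look like is given below; the Bid structure, the greedy ranking by price per Mbps and all the numbers are my own simplifications for illustration:

```python
# Minimal sketch of the Flow Controller's auction step: rank bids by offered
# price per Mbps and allocate bandwidth greedily until the pool is exhausted.
# The Bid structure and the greedy policy are simplifying assumptions.

from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    mbps: float            # requested bandwidth
    price_per_mbps: float  # offered price in $ per Mbps
    traffic_type: str      # video, voice, data, streaming
    qos: dict              # e.g. {"delay_ms": 50, "jitter_ms": 10}

def run_auction(bids: list[Bid], available_mbps: float) -> list[Bid]:
    """Return the winning bids, highest price per Mbps first."""
    winners = []
    for bid in sorted(bids, key=lambda b: b.price_per_mbps, reverse=True):
        if bid.mbps <= available_mbps:
            winners.append(bid)
            available_mbps -= bid.mbps
    return winners

bids = [
    Bid("A", mbps=10, price_per_mbps=0.05, traffic_type="video", qos={"delay_ms": 50}),
    Bid("B", mbps=20, price_per_mbps=0.02, traffic_type="streaming", qos={"delay_ms": 200}),
    Bid("C", mbps=2, price_per_mbps=0.08, traffic_type="voice", qos={"jitter_ms": 10}),
]
for w in run_auction(bids, available_mbps=15):
    print(w.user, w.mbps, "Mbps @", w.price_per_mbps)
# C and A win (12 Mbps used); B's 20 Mbps request does not fit and bids later.
```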

A typical scenario can be visualized as below

[Figure: different users bidding for available bandwidth through the Flow Controller]

In the above figure different users bid for the available bandwidth. For example, User A could bid for A Mbps @ $a/bit for traffic type A, User B could bid for B Mbps @ $b/bit for traffic type B, and User C could bid for C Mbps @ $c/bit for traffic type C. The different QoS parameters like delay, throughput and jitter are all sent in the user requests. The Flow Controller receives all these bids with their associated parameters and auctions the available bandwidth against the bid prices offered by the network elements. The Flow Controller then ranks the bids to find the bandwidth allocation that yields the highest return.

The Flow Controller can then allocate different bandwidths to the different users based on the bids, from the highest to the lowest, the quality of service and the type of traffic. The Software Defined Network can then create different flows across the network, carving out end-to-end slices of the network elements for each of the different flow requirements.

The Flow Controller can then create these flows and update the flow tables of the network elements based on the allotted speeds for the bid price.
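Purely as an illustration, the sketch below turns a winning bid into per-switch flow-table entries; the dictionary fields loosely mirror OpenFlow match/action/meter concepts but are not actual OpenFlow messages, and the path and switch names are assumed:

```python
# Illustrative translation of a winning bid into flow-table entries.
# The dict fields loosely mirror OpenFlow match/action/meter concepts but are
# not real OpenFlow messages; the path and switch names are assumptions.

from typing import NamedTuple

class Winner(NamedTuple):
    user: str
    mbps: float
    price_per_mbps: float
    traffic_type: str

def path_for(user: str) -> list[str]:
    """Placeholder for the controller's routing logic (assumed)."""
    return ["switch-edge-1", "switch-core-1", "switch-edge-2"]

def flow_entries_for(w: Winner) -> list[dict]:
    """Build one flow entry per switch on the end-to-end path."""
    return [{
        "switch": switch,
        "match": {"src_user": w.user, "traffic_type": w.traffic_type},
        "actions": ["output:next_hop"],
        "meter_rate_mbps": w.mbps,                 # rate-limit to the auctioned slice
        "priority": int(w.price_per_mbps * 1000),  # higher bid -> higher priority
    } for switch in path_for(w.user)]

for entry in flow_entries_for(Winner("A", 10, 0.05, "video")):
    print(entry["switch"], entry["meter_rate_mbps"], "Mbps, priority", entry["priority"])
```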

This is shown diagrammatically below

[Figure: the Flow Controller creating flows and updating the flow tables of the network elements]

For example, we could assume that a corporation has 3 different traffic classes, namely Immediate, ASAP (as soon as possible) and “price below $x”. Based on the upper ceiling for the bid price, the OpenFlow controller will allocate a flow for the corporation’s immediate traffic. For the ASAP class, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b; the OpenFlow Controller will arrange such a flow when the price allows. The last type of traffic will be allotted a default flow during non-peak hours. This requires that the OpenFlow controller be able to allocate different flows dynamically based on the outcome of the auction process in this scheme.
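A rough sketch of such a per-class bidding policy is shown below; the class names follow the example above, while all thresholds, prices and the off-peak window are placeholder assumptions:

```python
# Sketch of a per-class bidding policy for the corporate example above.
# Class names follow the text; all thresholds and prices are placeholders.

def should_bid(traffic_class: str, current_price: float, hour: int,
               ceiling: float, asap_low: float, asap_high: float,
               bulk_max: float) -> bool:
    """Decide whether to submit a bid now for the given traffic class."""
    if traffic_class == "immediate":
        return current_price <= ceiling                 # bid right away, up to the ceiling
    if traffic_class == "asap":
        return asap_low <= current_price <= asap_high   # bid only inside the agreed band
    if traffic_class == "below_x":
        off_peak = hour < 6 or hour >= 23               # assumed off-peak window
        return off_peak and current_price <= bulk_max   # default flow during off-peak
    return False

# Example: at 8 pm with the auction clearing at $0.04/Mbps
print(should_bid("immediate", 0.04, 20, ceiling=0.10, asap_low=0.01, asap_high=0.03, bulk_max=0.01))  # True
print(should_bid("asap",      0.04, 20, ceiling=0.10, asap_low=0.01, asap_high=0.03, bulk_max=0.01))  # False
print(should_bid("below_x",   0.04, 20, ceiling=0.10, asap_low=0.01, asap_high=0.03, bulk_max=0.01))  # False
```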

Using the OpenFlow paradigm to auction bandwidth

These are the typical steps that would occur during an auction cycle; a compressed sketch of one such round, in code, follows the list.

  1. Let us assume that it is the period of the day when the usage is at its peak
  2. Let there be 3 users, User A, User B and User C, who would like to video-conference, video-stream and make a voice call respectively
  3. Depending on the urgency and the price that they can afford, these 3 users will bid for a slice of bandwidth to complete their calls
  4. Let user A request A Mbps @ $a/bit for QoS parameters p(a). Let user B request B Mbps @ $b/bit for QoS parameters p(b) and user C request C Mbps @ $c/bit for QoS parameters p(c).
  5. When the Flow Controller receives these requests, it will, based on the available bandwidth at its disposal (assuming it has already used X Mbps for existing flows), normalize these requests and auction them so that the highest bid wins its requested bandwidth slice, followed by the bids below it. If a user's bid does not win the auction, the user can bid again at a later time according to some back-off algorithm. Let us assume that user A and user C win their bids
  6. The Flow Controller will now algorithmically decide the contents of the flow tables of the intervening network elements and will accordingly populate these flow tables
  7. The flows for User A and User C are now in progress.
  8. The Flow Controller will accept bids whenever there is spare bandwidth that can be put up for auction.
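The compressed sketch below strings steps 3 to 6 together into a single auction round; the bids, prices and available bandwidth are illustrative, and losing bidders are simply re-queued as in steps 5 and 8:

```python
# Compressed sketch of one auction round (steps 3-6 above).
# Bids, capacities and prices are illustrative; losers are simply re-queued.

def auction_round(bids: dict[str, tuple[float, float]], available_mbps: float):
    """bids: user -> (requested Mbps, offered $ per Mbps)."""
    winners, requeued = {}, []
    for user, (mbps, price) in sorted(bids.items(), key=lambda kv: kv[1][1], reverse=True):
        if mbps <= available_mbps:
            winners[user] = mbps
            available_mbps -= mbps
        else:
            requeued.append(user)          # step 5: losing bidders try again later
    return winners, requeued

bids = {"A": (8, 0.06), "B": (25, 0.03), "C": (1, 0.09)}    # video-conf, streaming, voice
winners, requeued = auction_round(bids, available_mbps=10)  # X Mbps already in use elsewhere
print("winners:", winners)      # {'C': 1, 'A': 8} -> their flow tables get populated (step 6)
print("re-queued:", requeued)   # ['B'] bids again when spare bandwidth is auctioned (step 8)
```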

As can be seen such a mechanism will result in a varying price for bandwidth with the highest value during peak periods and lower values during off-peak periods.

Benefits: The current QoS mechanisms of the internet, namely IntServ and DiffServ, allocate pipes based on the traffic type and class, and the allocation is static once made. The strategy described here enables OpenFlow to dynamically adjust the traffic flows based on the current bid price prevailing in that part of the network. Moreover, the use of the OpenFlow protocol can support far more granular flow types.

The ability of the OpenFlow protocol to dynamically allocate different flows will once and for all solve the problem of monetizing mobile and fixed-line data. This will be a win-win for both the Service Providers and the consumers. The Service Provider will be able to get an ROI on the infrastructure based on the traffic flowing through his network. Users can decide the type of service they are interested in and choose appropriately. The consumer, rather than paying a fixed access charge, could pay a smaller charge for low bandwidth usage.

Conclusion: An auction-based internet is a worthwhile business model to pursue. The ability to route traffic dynamically based on an auction mechanism enables the internet infrastructure to be utilized optimally. It serves the dual purpose of easing traffic congestion, since the highest bidders get the pipe, while also monetizing data traffic based on its importance to the end user.


Towards an auction-based Internet

The post below was quoted and discussed extensively in GigaOM, 14 Jan 2011 – “Software Defined Networks could create an auction-based bazaar” (see the link).

Published in Telecom Asia, Jan 13, 2012 – Towards an auction-based internet

Are we headed for an auction-based Internet? This train of thought (no pun intended), which struck me while I was travelling from Chennai to Bangalore last evening, was the result of the synthesis of different ideas and technologies that I had read about in the recent past.

The current state of technology and the technology trends do seem to indicate such a possibility. An auction-based internet would be a business model in which bandwidth is allocated to different data traffic on the internet based on dynamic bidding by different network elements. Such an eventuality is a distinct possibility considering the economics and latencies involved in data transfer, the evolution of the smart grid concept and the emergence of the promising technology known as the OpenFlow protocol. This is further elaborated below.

Firstly, in the book “Grids, Clouds and Virtualization” by Massimo Cafaro and Giovanni Aloisio, the authors highlight a typical problem of the computing infrastructure of today. They contend that a key issue in large-scale computing is data affinity, which is the result of the dual issues of data latency and the economics of data transfer. They quote Jim Gray (Turing Award, 1998), whose paper on “Distributed Computing Economics” states that programs need to be migrated to the data on which they operate rather than transferring large amounts of data to the programs. This principle is in fact used in the Hadoop paradigm, where locality is maintained by keeping the programs close to the data on which they operate.

The book highlights another interesting fact. It says the “cheapest and fastest way to move a Terabyte cross country is sneakernet” (i.e. the transfer of electronic information, especially computer files, by physically carrying removable media such as magnetic tape, compact discs, DVDs, USB flash drives, or external drives from one computer to another). Google used sneakernet to transfer 120 TB of data. The SETI@home project also used sneakernet to transfer data recorded by their telescopes in Arecibo, Puerto Rico, stored on magnetic tapes, to Berkeley, California.

It is now a well-known fact that mobile and fixed-line data has virtually exploded, clogging the internet. YouTube, video downloads and other streaming data choke the data pipes of the internet, and Service Providers have not found a good way to monetize this data explosion. While there has been a tremendous advancement in CPU processing power (of the order of petaflops) and enormous increases in storage capacity (of the order of petabytes), coupled with dropping prices, there has been no corresponding drop in bandwidth prices in relation to the bandwidth capacity.

Secondly, in the book “Hot, Flat and Crowded”, Thomas L. Friedman describes the “Smart Homes” of the future, in which all home appliances will have sensors and will participate in the energy auction in real time as part of the Smart Grid. The price of energy in the Energy Grid fluctuates like stock prices, since enterprises bid for energy during the day. In his Smart Home, Friedman envisions a situation in which the washing machine turns on during off-peak hours, when the price of energy in the energy grid is low. In this way all the appliances in the homes of the future will minimize energy consumption by adjusting their cycles accordingly.

Why could the internet not behave in a similar fashion? The internet pipes get crowded at different periods of the day, during seasons and during popular sporting events. Why can we not have an intelligent network in place in which the price of different data transfer rates varies depending on the time of the day, the type of traffic and the quality of service required? Could the internet be based on an auction mechanism in which different devices bid for bandwidth based on the urgency, speed and quality of service required? Is this possible with the routers and switches of today?

The answer is yes. This can be achieved through the new, path-breaking innovation known as Software Defined Networking (SDN), based on the OpenFlow protocol. SDN is the result of pioneering efforts by Stanford University and the University of California, Berkeley, is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate. Do read my post “Software Defined Networks: A glimpse of tomorrow” for a more detailed look at SDNs. SDNs can be made to dynamically route traffic flows based on decisions made in real time. The flow of data packets through the network can be controlled in a programmatic manner through the OpenFlow protocol. In order to dynamically allocate smaller or fatter pipes for different flows, it is necessary for the logic in the Flow Controller to be updated dynamically based on the bid price.

For example, we could assume that a corporation has 3 different flows, namely Immediate, ASAP (as soon as possible) and “price below $x”. Based on the upper ceiling for the bid price, the OpenFlow controller will allocate a flow for the corporation’s immediate traffic. For the ASAP flow, the corporation would have requested that the flow be arranged when the bid price falls within a range of $a – $b; the OpenFlow Controller will ensure that it can arrange for such a flow. The last type of traffic, which is not urgent, will be sent during non-peak hours. This requires that the OpenFlow controller be able to allocate different flows dynamically based on winning the auction process that happens in this scheme. The current QoS mechanisms of the internet, namely RSVP and DiffServ, allocate pipes based on the traffic type and class, and the allocation is static once made. This strategy enables OpenFlow to dynamically adjust the traffic flows based on the current bid price prevailing in that part of the network.

The ability of the OpenFlow protocol to dynamically allocate different flows will once and for all solve the problem of monetizing mobile and fixed-line data. Users can decide the type of service they are interested in and choose appropriately. This will be a win-win for both the Service Providers and the consumers. The Service Provider will be able to get an ROI on the infrastructure based on the traffic flowing through his network. The consumer, rather than paying a fixed access charge, could pay a smaller charge for low bandwidth usage.

An auction-based internet is not just a possibility but would also be a worthwhile business model to pursue. The ability to route traffic dynamically based on an auction mechanism enables the internet infrastructure to be utilized optimally. It will serve the dual purpose of easing traffic congestion, since the highest bidders get the pipe, while also monetizing data traffic based on its importance to the end user.

An auction-based internet is a very distinct possibility in our future, given the promise of the OpenFlow protocol.

All thoughts, ideas or counter-opinions are welcome!
