Dissecting the Cloud – Part 2

This post delves a little more deeply into the cloud. In the last post, Dissecting the Cloud – Part 1, I described the analogy of a person partitioning a large house into self-contained units, much as a hypervisor abstracts the underlying hardware (CPU, storage and NICs) into virtual CPUs, virtual NICs and virtual disks.

Hence the cloud has several instances, each with its own CPU, NIC and storage. In fact, several tenants can reside on the same cloud, each with their own individual CPU, NIC and storage. This is known as multi-tenancy.

However, multi-tenancy creates a unique set of issues, similar to those of a multi-tenanted house. For example, how does one isolate one tenant from another? How does one charge each tenant? Are the tenants secured from the prying eyes of their neighbors? How can the owner ensure that one particular tenant does not consume an inordinate amount of water or electricity at the expense of the other tenants?

These are typical problems in a multi-tenanted cloud. A common and high-profile issue in the cloud is that of the ‘noisy neighbor’. In this situation one of the instances on the cloud hogs the network bandwidth or the storage tier, resulting in a severe bandwidth crunch or storage access problems for the other instances. Here is an interesting article on the noisy neighbor issue: “The Problem with noisy neighbors in the cloud”.

It appears that IBM has patented a solution for the bandwidth crunch caused by noisy neighbors: IBM patents ‘noisy neighbor’ problem with SDN.

In order to realize multi-tenancy in the cloud, it is essential to isolate each tenant’s virtual CPUs, network and storage.

Network isolation: Network isolation is achieved through the use of VPNs (virtual private networks), VLANs (virtual LANs) and subnetting.

A VPN creates a secure tunnel between a user and the cloud instance when the instance is accessed over the internet; the data in motion is encrypted using IPsec. Also, vNICs belonging to a client are logically grouped together in a VLAN. Groups of vNICs can be subnetted together to allow broadcast between them. VLANs effectively isolate their traffic from that of other VLANs. A very good write-up on VLANs and subnetting can be seen at “What is the difference between subnetting and VLAN”.
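As a rough illustration of how subnetting keeps tenants apart, here is a minimal Python sketch using the standard ipaddress module; the tenant names, VLAN IDs and address ranges are invented for the example.

```python
import ipaddress

# Hypothetical tenant allocations: each tenant gets its own VLAN ID and subnet.
tenants = {
    "tenant_a": {"vlan": 100, "subnet": ipaddress.ip_network("10.0.1.0/24")},
    "tenant_b": {"vlan": 200, "subnet": ipaddress.ip_network("10.0.2.0/24")},
}

def same_broadcast_domain(tenant, ip1, ip2):
    """Two vNICs can broadcast to each other only if both sit in the tenant's subnet."""
    net = tenants[tenant]["subnet"]
    return ipaddress.ip_address(ip1) in net and ipaddress.ip_address(ip2) in net

print(same_broadcast_domain("tenant_a", "10.0.1.5", "10.0.1.9"))  # True: same VLAN/subnet
print(same_broadcast_domain("tenant_a", "10.0.1.5", "10.0.2.7"))  # False: crosses the tenant boundary
```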

Storage isolation: Storage in the cloud can be made up of block storage, SAN or NAS storage. Storage isolation is typically achieved through the hypervisor and through zoning. Zoning is the partitioning of a Fibre Channel fabric into smaller subsets to restrict interference, add security and simplify management. While a SAN makes several devices and/or ports available to a single device, each system connected to the SAN should only be allowed access to a controlled subset of these devices/ports.
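To make zoning concrete, here is a toy Python model in which a zone is simply a named set of port WWNs, and a host may reach a storage port only if both appear in a common zone; the zone names and WWNs are made up for illustration.

```python
# Toy model of Fibre Channel zoning: each zone is a named set of port WWNs.
zones = {
    "zone_tenant_a": {"wwn:host-a", "wwn:array-port-1"},
    "zone_tenant_b": {"wwn:host-b", "wwn:array-port-2"},
}

def can_access(initiator, target):
    """Access is permitted only if some zone contains both the initiator and the target."""
    return any(initiator in zone and target in zone for zone in zones.values())

print(can_access("wwn:host-a", "wwn:array-port-1"))  # True: zoned together
print(can_access("wwn:host-a", "wwn:array-port-2"))  # False: in a different zone
```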

CPU isolation: The hypervisor does create individual instances that are fairly well isolated from one another. However, this area is receiving more attention than storage or networking isolation because of security concerns, since it is prone to attack. In fact, I was greatly surprised to hear that there is a technique called a ‘side channel’ attack, by which an intruder, just by observing the time taken for computations and the temperatures generated, can reverse engineer the actual instructions. This is a really scary thought!

This is how multi-tenancy is achieved in clouds. I hope to revisit this topic in the future.


Presentation on Wireless Technologies – Part 2

Here is a continuation of my earlier presentation on Wireless Technologies – Part 1. These presentations trace the evolution of telecom from basic telephony all the way to the advances in LTE.


The moving edge of computing

Published in The Hindu – 30 Sep 2012 as “Three computing technologies that will power the world”

“The moving edge of computing computes and, having computed, moves on…” We could thus rephrase “The Moving Finger writes; and, having writ, moves on” from the Rubaiyat of Omar Khayyam. Computing technology has truly advanced by leaps and bounds. We are now in a new era of computing: the era of “intelligent and cognitive” computing.

From the initial days of number crunching in languages like FORTRAN, through the procedural methodology of Pascal and C, and later the object oriented paradigm of C++ and Java, we have come a long way. In this age of information overload, technologies that merely solve problems through steps and procedures are no longer adequate. We need technology to detect complex patterns and trends, understand the nuances of human language and automatically resolve problems. In this new era of computing, the following three technologies are furthering the frontiers of computing technology.

Predictive Analytics

By 2016, 130 exabytes (130 × 2^60 bytes) will rip through the internet. The number of mobile devices will exceed the human population this year, 2012, and by 2016 the number of connected devices will touch almost 10 billion. The devices connected to the net will range from mobiles, laptops and tablets to sensors and the millions of devices based on the “internet of things”. All these devices will constantly spew data onto the internet. A hot and happening trend in computing is the ability to make business and strategic decisions by determining patterns, trends and outliers among mountains of data. Predictive analytics will be a key discipline of our future, and its experts will be much sought after. Predictive analytics uses statistical methods to mine intelligence, information and patterns from structured data, unstructured data and data streams. It will be applied across many domains, from banking, insurance, retail and telecom to energy. There are also applications in energy grids and water management, besides determining user sentiment by mining data from social networks.
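As a tiny, self-contained taste of the statistical machinery involved, here is a Python sketch that flags outliers in a set of readings using a z-score test; the readings and the threshold of 2 are illustrative choices, not a production method.

```python
import statistics

# Illustrative sensor readings with one obvious anomaly (45.0).
readings = [12.1, 11.8, 12.4, 12.0, 11.9, 45.0, 12.2, 12.3]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag any reading more than 2 standard deviations from the mean.
outliers = [x for x in readings if abs(x - mean) / stdev > 2]
print(outliers)  # [45.0]
```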

Cognitive Computing

The most famous technological product in the domain of cognitive computing is IBM’s supercomputer Watson. IBM’s Watson is an artificial intelligence computer system capable of answering questions posed in natural language. It is best known for successfully trouncing a national champion in the popular US TV quiz competition, Jeopardy. What makes this victory all the more astonishing is that Watson had to successfully decipher the nuances of natural language and pick the correct answer. Following the success at Jeopardy, the Watson supercomputer has been employed by a leading medical insurance firm in the US to diagnose medical illnesses and to recommend treatment options for patients. Watson is able to analyze 1 million books, or roughly 200 million pages of information. Another equally well known example is Siri, the voice recognition app on the iPhone. The earlier avatar of cognitive computing was expert systems based on artificial intelligence. These expert systems were inference engines that operated on knowledge rules. The most famous among them were “Dendral” and “Mycin”. We appear to be on the cusp of tremendous advances in cognitive computing, judging by the success of IBM’s Watson.

Autonomic Computing

This is another computing trend that will become prevalent in the networks of tomorrow. Autonomic computing refers to the self-managing characteristics of a network. Typically it signifies the ability of a network to self-heal in the event of failures or faults. An autonomic network can quickly localize and isolate faults while keeping the other parts of the network unaffected. Besides this, such networks can quickly correct and heal faulty hardware without human intervention. Autonomic networks are typical in smart grids, where a fault can be quickly isolated and the network healed without causing a major outage in the electrical grid.
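The control loop behind such self-healing can be caricatured in a few lines of Python; the element names, health states and the ‘restart’ recovery action below are all hypothetical.

```python
# Toy autonomic loop: monitor elements, isolate the faulty ones, attempt recovery.
elements = {"switch-1": True, "switch-2": False, "router-1": True}  # name -> healthy?

def restart(name):
    print(f"restarting {name}")
    return True  # assume the automated recovery succeeds in this sketch

def self_heal(elements):
    for name, healthy in elements.items():
        if healthy:
            continue                    # healthy elements are left untouched
        print(f"isolating {name}")      # localize and isolate the fault
        elements[name] = restart(name)  # heal without human intervention

self_heal(elements)
print(elements)  # {'switch-1': True, 'switch-2': True, 'router-1': True}
```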

These are truly exciting times in computing as we move towards true intelligence!


Stacks of protocol stacks

Communication protocols, like any other technology, arrive on the scene to solve a particular problem. Some protocols endure while many perish. The last 60 years or so have seen a true proliferation of protocols in various domains.

So what is a protocol?
In my opinion a protocol is any pre-defined set of communication rules.

For example, consider this exchange between me and you:
Me: “Thank You”
You: “You’re welcome”.

A more complex exchange could be:
You: “How are you doing today?”
Me: ”Fine. And yourself?”
You: “Great”

These are “protocols of courtesy or decorum”. There are many such protocols in daily use, so it is little wonder that the technological world is full of protocols.

A couple of decades back there were three main standards bodies that came up with protocols, namely the IEEE (for LANs), the IETF (for the internet) and the ITU-T (for telecom). Now there are many more bodies, e.g. CableLabs for cable television, the WiMAX Forum for WiMAX, the NFC Forum etc.

Protocols also exist for both the wired and the wireless domains, and they differ based on the distance over which they apply. This post will take a look at some of the most important among them. Certainly many will slip through the cracks, so beware!

Near Field Communication (NFC): This is a wireless protocol with a range of the order of a few centimeters, primarily for contactless data transfer. Its main use is for mobile payment. As opposed to Bluetooth, there is no need for device pairing. The NFC standards are maintained by the NFC Forum.

Bluetooth: This is another wireless protocol; it uses the 2.4–2.48 GHz band for data exchange and is commonly used in mobile phones, TVs and other devices. This protocol requires pairing of devices prior to data transfer. The Bluetooth specifications are maintained by the Bluetooth Special Interest Group.

Zigbee: Zigbee is a low-power, low-cost wireless protocol for connecting devices within residential homes. Zigbee has a data rate of 250 kbps and is based on the IEEE 802.15.4 standard for Personal Area Networks (PAN) or Home Area Networks (HAN). Zigbee will be the protocol of choice in the Smart Home, which will be part of the Smart Grid concept. More details can be found at the Zigbee Alliance.

LAN protocols: LAN protocols are wired protocols. The three main LAN protocols, IEEE 802.3 (Ethernet), IEEE 802.4 (Token Bus) and IEEE 802.5 (Token Ring), are used in enterprises, schools and small buildings over distances of the order of a few hundred meters. LAN protocols provide transmission speeds of the order of 10 Mbps – 100 Mbps and beyond.

WiFi: WiFi provides wireless access in residential homes, airports and cafes at distances of around 20 meters, with nominal speeds ranging from 11 Mbps (802.11b) to 54 Mbps (802.11a/g). Wireless hotspots use WiFi protocols.

Super WiFi/Whitespaces: Whitespaces refers to using abandoned TV frequency bands around the 700 MHz range for wireless data transmission. Whitespace signals can travel large distances, typically up to around 100 km, and penetrate trees and walls. This is a nascent technology and is based on the IEEE 802.22 protocol. A new forum for taking this technology forward is the Whitespace Alliance.

Telecom protocols

ISDN: This protocol is governed by the Q.931 signaling standard and was supposed to carry high speed data from residential homes: each bearer (B) channel carries 64 kbps, so a basic rate interface (2B+D) provides 2 × 64 kbps + 16 kbps = 144 kbps. The protocol soon faded into relative obscurity.

Wired trunk protocols: There are several trunk protocols that connect digital exchanges (digital switches), e.g. ISUP (Q.763), BTUP and TUP. These protocols exchange messages between central offices and are used for setting up, maintaining and releasing STD voice calls.

Internet Protocols

The predominant protocol suite of the internet is TCP/IP (TCP is specified in RFC 793 and IP in RFC 791). Several other protocols work in the internet. A few of them:

Exterior Gateway Protocol (EGP)

OSPF (Open Shortest Path First) protocol

Interior Gateway Protocol (IGP)

RSVP & DiffServ

WAN protocols: There is a variety of protocols to handle communication between regions or across a large metropolitan area. The most common among these are:

MPLS: Multiprotocol Label Switching

ATM : Asynchronous Transfer Mode

Frame Relay: a packet-switched WAN protocol that succeeded X.25 in enterprise networks

X.25: an early packet-switched WAN protocol for data over telephone networks

Protocols that exist in both the Internet & Telecom domains

A number of protocols work in concert to set up, maintain and release multimedia sessions:

SIP/SDP: Session Initiation Protocol (RFC 3261 et al) /Session Description Protocol (RFC 2327)

SCTP/RTP/RTSP: Stream Control Transmission Protocol/Real-time Transport Protocol/Real Time Streaming Protocol – these protocols are used to transport and control media packets.

MGCP/Megaco: These are protocols used by the Softswitch, or Media Gateway Controller (MGC), to control media gateways.

WiMAX (Worldwide Interoperability for Microwave Access): This is a technology for wirelessly delivering high-speed internet service to large geographical areas. WiMAX offers data speeds in the range of 40 Mbps – 70 Mbps and is based on the IEEE 802.16 family of protocols. Details about WiMAX can be obtained from the WiMAX Forum.

DOCSIS: DOCSIS is the protocol used in cable TV and employs hybrid fiber-coaxial cables for transmission. The protocol is also used these days for internet access. More details regarding DOCSIS can be found at CableLabs.

Note: I will be adding more substance and body to this post soon …


Software Defined Networks (SDNs): A glimpse of tomorrow

Published in Telecom Asia, Jul 28, 2011 – A glimpse into the future of networking

Published in Telecoms Europe, Jul 28, 2011 – SDNs are new era for networking

Networks and networking as we know them are on the verge of a momentous change, thanks to a path-breaking technological concept known as Software Defined Networks (SDN). SDN is the result of pioneering work at Stanford University and the University of California, Berkeley; it is based on the OpenFlow protocol and represents a paradigm shift in the way networking elements operate.

Networks and network elements today are largely closed and based on proprietary architectures. In today’s networks, the switching and routing of data packets happen in the same network element, e.g. the router.

Software Defined Networks (SDN) decouple the routing and switching of the data flows and move the control of the flows to a separate network element, namely the flow controller. The motivation for this is that the flow of data packets through the network can then be controlled in a programmatic manner. A flow controller can typically be implemented on a standard PC. In some ways this is reminiscent of the Intelligent Network and the Intelligent Network protocol, which delinked the service logic from switching and moved it to a network element known as the Service Control Point.

The OpenFlow protocol has three components: the flow controller that controls the flows, the OpenFlow switch with its flow table, and a secure channel between the flow controller and the OpenFlow switch. The OpenFlow protocol is an open API specification for modifying the flow table that exists in routers, Ethernet switches and hubs. The ability to securely control the flow of traffic programmatically opens up amazing possibilities.
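To give a flavor of the flow-table idea, here is a simplified Python sketch of a switch matching packets against controller-installed (match, action) rules. The field names and actions are simplified stand-ins, not the actual match structure defined by the OpenFlow specification.

```python
# Simplified flow table: an ordered list of (match, action) rules installed
# by the controller. A packet takes the action of the first rule it matches.
flow_table = [
    ({"ip_dst": "10.0.0.5"}, "output:port2"),
    ({"tcp_dst": 80},        "output:port3"),
    ({},                     "send_to_controller"),  # table-miss rule matches everything
]

def handle_packet(packet):
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action

print(handle_packet({"ip_dst": "10.0.0.5", "tcp_dst": 22}))  # output:port2
print(handle_packet({"ip_dst": "10.0.0.9", "tcp_dst": 80}))  # output:port3
print(handle_packet({"ip_dst": "10.0.0.9", "tcp_dst": 22}))  # send_to_controller
```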

OpenFlow Specification

Alternatively, vendors can implement the OpenFlow protocol as an added feature of their existing routers and Ethernet switches. This will enable these routers and Ethernet switches to support both production traffic and research traffic on the same set of network resources.

The single greatest advantage of separating the control and data planes of network routers and Ethernet switches is the ability to modify and control different traffic flows through a set of network resources. In addition to this benefit, Software Defined Networks (SDNs) also bring the ability to virtualize the network resources. A set of virtualized network resources is known as a “network slice”. A slice can span several network elements, including the network backbone, routers and hosts.

Computing resources can be virtualized through the use of the hypervisor, which abstracts the hardware and enables several guest OSes to run in complete isolation. Similarly, when a network element known as a FlowVisor (so far experimentally demonstrated) is used along with the OpenFlow controller, it is possible to virtualize the network resources. Each traffic flow then gets its own combination of bandwidth, routers and computing resources. Hence Software Defined Networks (SDNs) are also known as Virtualized Programmable Networks, owing to the ability of different traffic flows to co-exist in perfect isolation from one another while the flows through the resources are controlled by programs in the flow controller.
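A toy Python sketch of this kind of slicing, with invented slice names and predicates: each slice owns a region of ‘flowspace’, and a packet is handed to the controller of whichever slice claims it.

```python
# Each slice owns a flowspace: a predicate over packet headers.
slices = {
    "research":   lambda pkt: pkt.get("vlan") == 100,
    "production": lambda pkt: pkt.get("vlan") == 200,
}

def controller_for(packet):
    """Return the slice whose flowspace claims this packet, if any."""
    for name, owns in slices.items():
        if owns(packet):
            return name
    return None

print(controller_for({"vlan": 100}))  # research
print(controller_for({"vlan": 200}))  # production
```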

The ability to manage different types of traffic flows across network resources opens up endless possibilities. SDNs have been successfully demonstrated in wireless handoffs between networks and in running multiple different flows through a common set of resources. SDNs in public and private clouds allow appropriate resources to be pooled at different times of the day based on the geographical location of the requests. Telcos could optimize the usage of their backbone network based on peak and lean traffic periods through the core network.

The OpenFlow protocol has already gained widespread support in the industry and has resulted in the formation of the Open Networking Foundation (ONF). The members of the ONF range from behemoths like Google, Facebook, Yahoo and Deutsche Telekom to networking giants like Cisco, Juniper, IBM and Brocade. Currently the ONF has around 43 member companies.

Software Defined Networks are a tectonic shift in the way networks operate and truly represent the dawn of a new networking era. A related post of interest is “Adding the OpenFlow variable in the IMS equation”.
