A Roundup of Web Technologies

The internet and the World Wide Web are woven into our daily lives so intricately that life without them is unimaginable. We use the web for our daily news, for finding directions (maps), socializing (Facebook), sending and receiving email, and buying e-tickets and books from online retail stores. With a click, a drag-and-drop or just a mouse-over on a web page we see results instantaneously. But what are the technologies that power the Web, outside of the routers and hubs of the data communication world?

Actually, if one peeks into the technologies that power Web 2.0, one is amazed at the bewildering array of technological choices one is confronted with. My curiosity was piqued when I found how many possibilities lie behind different websites, from Gmail, http://www.amazon.com, Twitter and Facebook to maps.yahoo.com.

This article tries to give a bird’s eye view of the different technologies at the different layers. In many ways it will be more a name-dropping of technologies than a real treatment of each individual piece. I am merely presenting the different technologies as an interested spectator rather than as a web expert.

Presentation Layer: This is the layer which presents the web page to the user. In the presentation layer most pages are made up of elements from HTML, CSS, PHP, JavaScript and AJAX. These are different scripting mechanisms to display content or take input from the user. Subsequently there arose the need for Rich Internet Application (RIA) technologies to provide a far superior user experience. These technologies are used to display video content and animations. Hence we have technologies ranging from Flash and Flex to more sophisticated ones like Liferay, PrimeFaces, MyFaces and Java Server Faces (JSF), and now HTML5. They allow for drag-and-drop functionality and for incorporating videos and animations in web pages, making the user experience similar to that on the desktop.

Enterprise Layer: At this layer the user input is processed and the client makes the necessary requests to the back-end server to get the appropriate results. In this layer too there is a virtual explosion of technologies that make this possible. From the earlier C++ and Java programs the movement was towards Enterprise Java Beans (EJB) invoked through servlets or Java Server Pages. To make the life of the web developer easier (?) there are several web frameworks that automate some of the common tasks of the developer. Some of them are Django with Python, Ruby on Rails (RoR), Groovy Grails, Perl-Catalyst, Python-Flask and so on. Each web framework has its pros and cons and a different learning curve. While Python developers thrive on “there is only one way to do a thing”, die-hard Ruby developers believe in the “do not repeat yourself (DRY)” philosophy. So the technology choice will be a matter of taste combined with the deadlines for the project.
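To make this concrete, here is a minimal sketch of the kind of request handling such frameworks automate, assuming Flask (one of the Python options named above); the /orders route, the payload fields and the port are illustrative choices of mine, not taken from any particular site.

```python
# Minimal Flask sketch: the framework maps an incoming HTTP request to a
# Python function, parses the JSON body and returns a JSON response.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/orders", methods=["POST"])
def create_order():
    order = request.get_json()        # user input arriving from the presentation layer
    # ... validation and calls into the persistence layer would go here ...
    return jsonify({"status": "created", "item": order.get("item")}), 201

if __name__ == "__main__":
    app.run(port=5000)                # development server; illustrative port
```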

Persistence Layer: At the persistence layer there is Hibernate, which maps the relational model to an object model and vice versa, making it easy to manipulate the rows and columns of tables as objects. Usually this layer is coupled with the Spring framework. Another competing technology is the Struts framework.
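Since Hibernate itself is Java, here is the analogous object-relational mapping idea sketched in Python with SQLAlchemy instead; the Book table and the in-memory SQLite database are illustrative assumptions.

```python
# ORM sketch: a Python class is mapped to a relational table, so rows can be
# created and queried as ordinary objects instead of via hand-written SQL.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Book(Base):
    __tablename__ = "books"
    id = Column(Integer, primary_key=True)
    title = Column(String)

engine = create_engine("sqlite:///:memory:")   # illustrative throwaway database
Base.metadata.create_all(engine)               # create the mapped table

with Session(engine) as session:
    session.add(Book(title="Web 2.0"))         # an object becomes a row
    session.commit()
    for book in session.query(Book):           # rows come back as objects
        print(book.id, book.title)
```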

Database Layer: While Hibernate can be used as the persistence layer, it is also possible to access the database directly through ODBC, JDBC, etc.

Exchange of Data: In the earlier days, sending and receiving data or invoking remote procedures was done through CORBA or RPC (Remote Procedure Calls). Subsequently other methods were implemented for data exchange between servers, ranging from XML, JSON (JavaScript Object Notation) and SOAP (Simple Object Access Protocol) to the more current REST (Representational State Transfer).
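As a small sketch of the JSON-over-REST style, the following uses only Python's standard library to POST a JSON document to a hypothetical endpoint and parse the JSON reply; the URL and fields are placeholders.

```python
# Serialize a Python dict to JSON, send it to a REST endpoint and parse the reply.
import json
import urllib.request

payload = json.dumps({"user": "alice", "action": "get_balance"}).encode("utf-8")
req = urllib.request.Request(
    "http://localhost:8080/api/v1/accounts",   # hypothetical REST endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read().decode("utf-8"))
    print(result)
```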

Hence there is a plethora of choices to be made in the design of web sites complete with back-end processing. The choices made will depend on the look and feel of the web site coupled with the ease of implementation given the project deadlines.


The Anatomy of Latency

Latency is a measure of the time delay experienced in a system. In data communications, latency would be measured as the round-trip delay between sending a packet and receiving the response from the destination. In the world of web applications latency is the response time of a web site. In web applications latency depends on both the round trip time on the communication link and the processing time of the application. Hence we could say that

latency = round trip time + processing time

The round trip time is probably less susceptible to increasing traffic than the time taken for processing the increased loads. The processing time of the application is particularly pernicious in that it is susceptible to changing traffic. This article tries to analyze why the latency or response times of web applications typically increase with increasing traffic. While the latency increases exponentially as the traffic increases, the throughput increases to a point and then finally starts to drop substantially. The ideal situation for all internet applications is to have the ability to scale horizontally, allowing the application to handle increasing traffic by simply adding more commodity servers while maintaining response times within acceptable limits. However, in the real world this never happens.

The price of latency

Latency hurts business. Amazon found that every 100 ms of latency cost them 1% of sales. Similarly, Google realized that a 0.5 second increase in the time to return search results dropped search traffic by 20%. Latency really matters. Reactions to bad response times in web sites range from minor annoyance to complete frustration and loss of users and business.

The cause of processing latency

One of the fundamental requirements of scalable systems is that they should be loosely coupled. The application needs to have a modular architecture with well defined interfaces to the other modules. Ideally, applications which have been designed with fairly efficient processing times of the order of O(log n) or O(n log n) will be largely immune to changing loads, though they will be impacted by changes in the number of data elements. So the algorithms adopted by the applications themselves do not contribute to the increasing response times for increased traffic. So what really is the performance bottleneck behind increasing latencies and decreasing throughput under increased loads?

Contention - the culprit

One of the culprits behind the deteriorating response times is thread locking and resource contention. Assuming the application has been designed with reader-writer locks or a message-queue based synchronization mechanism, the time spent waiting for resources to become free as traffic increases will result in degraded performance.

Let us assume that the application is read-heavy and write-light and has implemented a reader-writer synchronization mechanism. Further let us assume that a write thread locks a resource for 250 ms. At low loads we could have 4 such threads, each locking the resource for 250 ms, for a total span of 1 s. In that 1 s interval all reader threads will be forced to wait. When the traffic load is low the number of reader threads waiting for the lock to be released will be small and will not have much impact, but as the traffic increases the number of waiting threads will increase. Since a write lock takes a finite amount of time to complete its processing, we cannot go beyond 4 write threads per second at the given CPU speed.
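The toy sketch below (mine, not from any production system) mimics this scenario in Python: four writer threads each hold an exclusive lock for about 250 ms, and every reader that arrives in that window has to wait. A plain mutex stands in for the write side of a reader-writer lock, and the thread counts are arbitrary.

```python
# Writers hold the lock for ~250 ms each; readers arriving meanwhile must wait.
import threading
import time

lock = threading.Lock()
waits, waits_lock = [], threading.Lock()

def writer():
    with lock:
        time.sleep(0.25)                 # resource held for 250 ms per write

def reader():
    start = time.time()
    with lock:                           # blocked while any writer holds the lock
        pass
    with waits_lock:
        waits.append(time.time() - start)

def run(num_readers):
    waits.clear()
    threads = [threading.Thread(target=writer) for _ in range(4)]
    threads += [threading.Thread(target=reader) for _ in range(num_readers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    blocked = sum(1 for w in waits if w > 0.01)
    print(f"{num_readers} readers: {blocked} had to wait, max wait {max(waits)*1000:.0f} ms")

if __name__ == "__main__":
    for load in (10, 100, 500):          # rising traffic: more threads stuck waiting
        run(load)
```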

However, as the traffic increases further, the waiting threads not only grow in number but also consume CPU and memory. This adversely impacts the writer threads, which find that they have fewer CPU cycles and less memory and hence take longer to complete. This downward cycle worsens, resulting in increasing response times and worsening throughput in the application.

The solution to this problem is not easy. We need to revisit the areas where the application blocks waiting for something. Locking, besides causing threads to wait, also adds the overhead of getting scheduled before being able to execute again. We need to minimize the time a thread holds a resource before allowing other threads access to it.


Designing for Cloud Worthiness

Cloud Computing is changing the rules of computing for the enterprise. Enterprises are no longer constrained by the capital costs of upfront equipment purchase. Rather, they can concentrate on the application, deploy it on the cloud and pay in a utility style based on usage. Cloud computing essentially presents a virtualized platform on which applications can be deployed.

The Cloud exhibits the property of elasticity by automatically adding more resources to the application as demand grows and shrinking the resources when the demand drops. It is this property of elasticity of the cloud and the ability to pay based on actual usage that makes Cloud Computing so alluring.

However, to take full advantage of the cloud the application must use the available cloud resources judiciously. It is important for applications that are to be deployed on the cloud to have the property of scaling horizontally. What this implies is that the application should be able to handle more transactions per second when more resources are added to it. For example, if the application has been designed to run in a small CPU instance of 1.7 GHz, 32-bit and 160 GB of instance storage with a throughput of 800 transactions per second, then one should be able to add four more such instances (five in all) and scale to handling 4,000 transactions per second.

However, there is a catch in this. How does one determine the theoretical limit of transactions per second for a single instance? Ideally we should maximize the throughput and minimize the latency of each instance prior to going to the next step of adding more instances on the cloud. One should squeeze the maximum performance from the application in the instance of choice before using multiple instances on the cloud. Typical applications perform reasonably well under small loads, but as the traffic increases the response time increases and the throughput also starts dipping.

There is a need to run some profiling tools and remove bottlenecks in the application. The standard refrain for applications to be deployed on the cloud is that they should be loosely coupled and also stateless. However, most applications tend to be multi-threaded with resource sharing in various modules. The performance impact of locks and semaphores should be given due consideration. Typically a lot of time is wasted in the wait state of threads in the application. A suitable technique should be used for providing concurrency among threads. The application should be analyzed to determine whether it is read-heavy and write-light or write-heavy and read-light, and suitable synchronization techniques like reader-writer locks, message-queue based exclusion or monitors should be used.

I have found callgrind extremely useful for profiling and gathering performance characteristics, along with KCachegrind for providing a graphical display of the performance times.

Another important technique to improve performance is to maintain an in-memory cache of frequently accessed data. Rather than making frequent queries to the database, periodic updates from the database should be made and stored in the in-memory cache. However, while this technique works fine with a single instance, the question of how to handle in-memory caches across multiple instances in the cloud is quite a challenge. When there are multiple instances in the cloud there is a need for a distributed cache which is shared among them. Memcached is an appropriate technology for maintaining a distributed cache in the cloud.
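A minimal cache-aside sketch with memcached is shown below, using pymemcache (one of several Python memcached clients); the server address, key scheme and load_user_from_db() stand-in are assumptions for illustration.

```python
# Cache-aside: look in the distributed cache first, fall back to the database,
# then populate the cache with a short TTL so entries refresh periodically.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))        # assumed memcached endpoint

def load_user_from_db(user_id):
    # stand-in for an expensive database query
    return {"id": user_id, "name": "example"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # served from the shared cache
    user = load_user_from_db(user_id)
    cache.set(key, json.dumps(user), expire=300)   # 5-minute TTL
    return user
```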

Once the application has been ironed out for maximum performance the application can be deployed on the cloud and stress tested for peak loads.

Some good tools that can be used for generating load on the application are loadUI and multi-mechanize. Personally I prefer multi-mechanize as it uses test scripts based on Python which can be easily modified for the testing. One can simulate browser functionality to some extent with Python in multi-mechanize, which can prove useful.
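For readers who want the flavour of such load tests, here is a generic, self-contained Python sketch (deliberately not in multi-mechanize's own script format) that spawns virtual users against a URL and reports the mean response time; the URL and counts are arbitrary.

```python
# A crude load generator: N threads each issue a series of HTTP GETs and record timings.
import threading
import time
import urllib.request

URL = "http://localhost:8080/"               # assumed application under test
results, results_lock = [], threading.Lock()

def virtual_user(requests_per_user):
    for _ in range(requests_per_user):
        start = time.time()
        with urllib.request.urlopen(URL) as resp:
            resp.read()
        with results_lock:
            results.append(time.time() - start)

if __name__ == "__main__":
    users = [threading.Thread(target=virtual_user, args=(20,)) for _ in range(10)]
    for u in users:
        u.start()
    for u in users:
        u.join()
    print(f"{len(results)} requests, mean response {sum(results)/len(results):.3f} s")
```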

Hence, while the cloud provides CPUs, memory and database resources on demand, the enterprise needs to design applications such that these resources are used judiciously. Otherwise the enterprise will not be able to reap the benefits of utility computing if it deploys inefficient applications that hog a lot of resources without commensurate revenue-generating performance.

INWARDi Technologies

Technology Trends – 2011 and beyond

There are lots of exciting things happening in the technological landscape. Innovation and development in every age are dependent on a set of key driving factors, namely the need for better, faster and cheaper, the need to handle disruptive technologies, the need to keep costs down and the need to absorb path-breaking innovations. Given all these factors and the current trends in the industry, the following technologies will enter the mainstream in the years to come.

Long Term Evolution (LTE): LTE, also known as 4G, was born out of the disruptive entry of data-hungry smartphones and tablet PCs. Besides, the need for better and faster applications has been a key driver of this technology. LTE is a data-only technology that allows mobile users to access the internet on the move. LTE uses OFDM technology for sending and receiving data from user devices and also uses MIMO (multiple-input, multiple-output) antennas. LTE is more economical and spectrally efficient when compared to earlier 3.5G technologies like HSDPA, HSUPA and HSPA. LTE promises a better Quality of Experience (QoE) for end users.

IP Multimedia Subsystem (IMS): IMS has been around for a while. However, with the many advances in IP technology and the transport of media, the time is now ripe for this technology to take wings and soar. IMS uses the ubiquitous Internet Protocol for its core network, both for media transport and for SIP signaling. Many innovative applications are possible with IMS, including high-definition video conferencing, multi-player interactive games, whiteboarding etc.

All senior management personnel of organizations are constantly faced with the need to keep costs down. The next two technologies hold a lot of promise in reducing costs for organizations and will surely play a key role in the years to come.

Cloud Computing: Cloud computing obviates the need for upfront capital and infrastructure costs for organizations. Enterprises can deploy their applications on a public cloud which places virtually infinite computing capacity in their hands. Organizations pay only for what they use, akin to utilities like electricity or water.

Analytics: These days organizations are faced with a virtual deluge of data from their day-to-day operations. Whether an organization belongs to retail, health, finance, telecom or transportation, there is a lot of data that is generated. Data by itself is useless. This is where data analytics plays an important role. Predictive analytics helps in classifying data, determining key trends and identifying correlations between data. This helps organizations in making strategic business decisions.

The following two technologies listed below are really path breaking and their applications are limitless.

Internet of Things: This technology envisages either passive or intelligent devices connected to the internet with a database at the back end for processing the data collected from these intelligent devices. This is also known as M2M (machine to machine) technology. The applications range from monitoring the structural integrity of bridges to implantable devices monitoring fatal heart diseases of patients.

Semantic Web (Web 3.0): This is the next stage in the evolution of the World Wide Web. The Web is now a vast repository of ideas, thoughts, blogs, observations etc. This technology envisages intelligent agents that can analyze the information on the web. These agents will determine the relations between pieces of information and make intelligent inferences. This technology will have to use artificial intelligence techniques, data mining and cloud computing to plumb the depths of the web.

Conclusion: Creativity and innovation have been the hallmark of mankind from time immemorial. With the demand for smarter, cheaper and better products, the above technologies are bound to endure in the years to come.


Learning to innovate

Published in Associated Content

Mankind’s progress is measured by the depth of creativity and the number of innovations in every period. The human race has an irresistible urge to make things better, faster, cheaper. While modern technological marvels like the airplane, the laptop or an LCD TV continue to amaze us, people in these industries will most probably concur that most of the developments that have happened in these domains have been in small increments over long periods of time. Barring a few breakthrough discoveries like calculus by Isaac Newton or the Theory of General Relativity by Albert Einstein most of the innovations that have happened have been incremental and are the result of careful analysis, sudden insight after several days of thoughtful deliberation, or sound judgment.

Creativity need not be sensational nor even path breaking. Creative ideas can just be incremental. Not all ideas happen in the spur of the moment or are serendipitous.

Innovation and creativity can be deliberate and well thought-out. There is neither a silver bullet for innovation nor a magic potion to inspire creativity. However, there are certain eternal principles of innovation that keep repeating time and time again. Almost all innovations are based on the constant need for simplicity, reliability, disaster-proofing, extensibility or safety. “Invention is the child of necessity”, and innovations happen when there is a strong need for them. This article explores some of the eternal principles behind the various innovations of the ages. It looks at the motivation and the thought processes behind those inventions. Various common day-to-day gadgets, processes or devices are explored to determine the principles behind the particular improvement or advancement.

Some of the principles behind innovations are looked at below in greater detail.

  1. Simplicity

Several innovations are based on the principle of simplicity. Smaller, simpler, faster has been the driver of several innovations. For example, the RISC (Reduced Instruction Set Computer) design strategy is based on the insight that simplified instructions can provide higher performance. The philosophy behind this is that simplicity enables much faster execution of each instruction as opposed to CISC (Complex Instruction Set Computer). Also in favor of simplicity are plain English text based protocols like HTTP and SIP, as opposed to the more esoteric telecommunication or data communication protocols which send and receive messages in binary 1’s and 0’s and have complicated construction rules. The tradeoff for simplicity, however, is the consequent increase in bandwidth usage or the required capacity of the communication pipe. To counter this increase in bandwidth there are several innovative techniques to compress the text messages. These compression algorithms compress the messages before sending them over the communication pipe, and the compressed messages are uncompressed on reception to obtain the original message.
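The compression idea can be seen in a few lines of Python using the standard zlib module; the repeated SIP-like text is just a made-up sample message.

```python
# Compress a verbose text message before "sending" it and recover it on "reception".
import zlib

message = b"INVITE sip:alice@example.com SIP/2.0\r\nVia: SIP/2.0/UDP host1\r\n" * 10
compressed = zlib.compress(message)
restored = zlib.decompress(compressed)

print(len(message), "bytes ->", len(compressed), "bytes")
assert restored == message      # lossless: the original text comes back exactly
```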

  2. Safety

Some incremental changes result from the need to increase safety. These are based on the principle of disaster-proofing devices, or of protecting us human beings from freak accidents. For example, the safety pin is an incremental improvement over the regular pin. The electrical fuse and the safety valve in a pressure cooker are two such innovative improvements based on the principle of safety. The electrical fuse is an interesting innovation and is based on the principle that when there is an abnormal surge of current, the resulting heat melts the fuse wire, thus breaking the electrical connection. This protects the gadget and its more expensive internal circuits from serious damage. Similarly, the safety valve in a pressure cooker gives way if the pressure inside the cooker goes beyond safe limits. This protects the cooker from bursting due to excess pressure and causing damage to people and property.

  3. Judgment

Some innovations are based on the principle of sound engineering judgment and, to a large extent, common sense. One such innovation is the LRU technique of an operating system like Windows or UNIX. This technique or algorithm helps the computer decide which specific “page”, or section of a large program in the computer’s Random Access Memory (RAM), should be moved out or swapped to the disk (only a limited number of pages can be in the RAM at a time). The computer makes this decision based on the “Least Recently Used” (LRU) technique. Amongst the many pages in the computer’s RAM, the page that has been used the least in the recent past can be moved out to the disk, under the assumption that it is less likely to be used in the future.
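The LRU policy is easy to sketch in Python with an OrderedDict; the capacity of three “pages” and the access pattern are purely illustrative.

```python
# A fixed-capacity page table: on overflow, evict the least recently used page.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()

    def access(self, page, data=None):
        if page in self.pages:
            self.pages.move_to_end(page)                     # mark as most recently used
        else:
            if len(self.pages) >= self.capacity:
                evicted, _ = self.pages.popitem(last=False)  # least recently used goes out
                print("swapped out page", evicted)
            self.pages[page] = data
        return self.pages[page]

cache = LRUCache(capacity=3)
for page in ["A", "B", "C", "A", "D"]:   # when "D" arrives, "B" is the LRU page
    cache.access(page)
```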

Another wonderful innovation, based on sound engineering judgment, is Huffman’s method of coding. This technique is used in computer communication as an efficient method of encoding English text. It is based on a simple rule: more frequently occurring letters in the text are assigned a smaller number of binary digits (0 or 1), or bits, and less frequently occurring letters are assigned a larger number of binary digits. Since the letter ‘e’ occurs most frequently it is assigned the fewest binary digits, while a rare letter like ‘z’ is assigned the most. The resulting binary string is efficient and optimal. This technique clearly shows the ingenuity in the algorithm.
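A compact Huffman sketch in Python shows the effect: frequent symbols get short codes. The handful of letter frequencies below are rough illustrative values, not a real corpus.

```python
# Build Huffman codes with a min-heap: repeatedly merge the two least frequent
# subtrees, prefixing '0' to one side's codes and '1' to the other's.
import heapq

def huffman_codes(freq):
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

freqs = {"e": 12.7, "t": 9.1, "a": 8.2, "z": 0.07}   # illustrative frequencies (%)
for sym, code in sorted(huffman_codes(freqs).items(), key=lambda kv: len(kv[1])):
    print(sym, code)        # 'e' gets the shortest code, 'z' one of the longest
```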

  4. Paradigm Shift

The move from vacuum-tube based electronics, for example the diode and triode, to the more compact, smaller and less power-consuming semiconductor devices was a major paradigm shift in the realm of electronics. So also the move from cassette tapes to compact discs (CDs) and from VHS video tapes to the now compact DVDs were game-changing innovations. Similarly, the move from film-based cameras to digital cameras is a huge paradigm shift. Innovations which are based on a paradigm different from the existing philosophy truly require out-of-the-box or lateral thinking coupled with great perceptiveness and knowledge of the field.

  5. Logical induction/deduction

In the mid 19th century many great and powerful inventions and discoveries were made by giants like Faraday, Lenz, Maxwell and Fleming in the field of electromagnetism. One finding was that a voltage, and hence a current, is produced across a conductor situated in a changing magnetic field. This consequently led to the invention of the electro-mechanical generator for generating electricity. Conversely, it was found that if a current-carrying conductor is located in an external magnetic field perpendicular to the conductor, the conductor experiences a force perpendicular to itself and to the external magnetic field. This principle, which results in a force on the conductor, promptly led to the invention of the still prevalent electro-mechanical motor.

  6. Cause-Effect-Cause

There are many inventions which are based on the cause-effect-cause principle. In this category of innovations the cause and effect are interchanged in different situations to handle different problems. For example, thermal energy may be converted to electrical energy as in a steam turbine, or conversely electrical energy may be converted to heat as in an electric cooker. Similarly there are inventions where light energy and electrical energy are transformed into each other based on the specific need. A recent investigation into the possibility of a remote power outlet led the author on an interesting journey. Since electrical energy can be converted to microwaves (as in a microwave cooker), the author wondered whether the reverse was possible, i.e. could microwaves be transmitted across space and converted to electricity at the receiving end? An internet search showed that this is very much possible and had been thought of more than a decade back. It is known as Microwave Power Transmission (MPT). However, practical applications of this on earth are not possible because of the radiation hazards of microwaves; MPT is nevertheless used in outer space.

  7. Feedback principle

An excellent example of this is the room air conditioner (A/C). The room air conditioner maintains a constant temperature: if there is an increase in room temperature it increases the cooling, and if there is a drop in temperature it decreases the rate of cooling. Many day-to-day inventions are based on this principle of feedback, where the result is fed back and manipulated internally such that the resultant output is maintained at a constant level.
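The same loop can be caricatured in a few lines of Python: a controller measures the “room temperature”, compares it against a set point and feeds the error back into the cooling rate. All the constants are arbitrary illustration values.

```python
# Toy feedback loop: cooling effort is proportional to how far the temperature
# is above the set point, pulling the room back towards it each step.
set_point = 24.0        # desired room temperature (deg C)
temperature = 30.0      # current room temperature
heat_inflow = 0.5       # warming from the environment per time step

for step in range(20):
    error = temperature - set_point
    cooling = max(0.0, 0.3 * error)          # cool harder the further above the set point
    temperature += heat_inflow - cooling     # the output is fed back into the next input
    print(f"step {step:2d}: temperature {temperature:.2f}, cooling {cooling:.2f}")
```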

  8. Insight & Ingenuity

Vacuum tubes and the semiconductor transistor can be made to toggle between two distinct states (on and off). This property, together with the insight and knowledge of binary arithmetic, soon led to the use of electronic devices for binary arithmetic and to the development of electronic circuits for simple addition, subtraction, multiplication etc. of binary numbers. Invention after invention and innovation after innovation led to the development and metamorphosis of the now ubiquitous Personal Computer.

Conclusion

In conclusion, innovations can be incremental. There are numerous examples of human cleverness and innovation all around us. One just needs to notice them, and pretty soon one begins to appreciate human ingenuity. We need to reflect on the thought processes behind each incremental innovation that has made our lives more pleasant and convenient. Hopefully, as we become more aware, someday some of us may have a defining “eureka” moment.


The rise of analytics

Published in The Hindu – The rise of analytics

We are slowly, but surely, heading towards the age of “information overload”. The Sloan Digital Sky Survey, started in the year 2000, returned around 620 terabytes of data in 11 months — more data than had ever been amassed in the entire history of astronomy.

The Large Hadron Collider (LHC) at CERN, Europe’s particle physics laboratory in Geneva, will spew out terabytes of data in its wake early next year during its search for the origins of the universe and the elusive Higgs particle. Now there are upward of five billion devices connected to the Internet and the numbers show no signs of slowing down.

A recent report from Cisco, the data networking giant, states that the total data navigating the Net will cross half a zettabyte (10²¹ bytes) by the year 2013.

Such astronomical volumes of data are also handled daily by retail giants including Walmart and Target and telcos such as AT&T and Airtel. Also, advances in the Human Genome Project and technologies like the “Internet of Things” are bound to throw up large quantities of data.

The issue of storing data is now slowly becoming non-existent with the plummeting prices of semi-conductor memory and processors coupled with a doubling of their capacity every 18 months with the inevitability predicted by Moore’s law.

Plumbing the depths

Raw data is by itself quite useless. Data has to be classified, winnowed and analysed into useful information before it can be utilised. This is where analytics and data mining come into play. Analytics, once the exclusive preserve of research labs and academia, has now entered the mainstream. Data mining and analytics are now used across a broad swath of industries — retail, insurance, manufacturing, healthcare and telecommunication. Analytics enables the extraction of intelligence, identification of trends and the ability to highlight the non-obvious from raw, amorphous data. Using the intelligence that is gleaned from predictive analytics, businesses can make strategic game-changing decisions.

Analytics uses statistical methods to classify data, determine correlations, identify patterns, and highlight and detect deviations among large data sets. Analytics includes in its realm complex software algorithms such as decision trees and neural nets to make predictions from existing data sets. For example, a retail store would be interested in knowing the buying patterns of its consumers. If the store could determine that product Y is almost always purchased when product X is purchased, then it could come up with clever schemes such as an additional discount when products X and Y are bought together. Similarly, telcos could use analytics to identify the predominant trends that promote customer loyalty.
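The buying-pattern idea can be illustrated with a toy market-basket calculation in plain Python; the baskets and products below are made up, and a real system would of course use far larger data sets and proper association-rule mining.

```python
# For a rule "X -> Y", confidence = (baskets containing both X and Y) / (baskets containing X).
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]

def confidence(x, y):
    with_x = [b for b in baskets if x in b]
    both = sum(1 for b in with_x if y in b)
    return both / len(with_x) if with_x else 0.0

print("confidence(bread -> butter) =", confidence("bread", "butter"))  # 3 of 4 = 0.75
```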

Studying behaviour

Telcos could come up with voice and data plans that attract customers based on consumer behaviour, after analysing data from their point-of-sale and retail stores. They could use analytics to determine the causes of customer churn and come up with strategies to prevent it.

Analytics has also been used in the health industry in predicting and preventing fatal infections in infants based on patterns in real-time data like blood pressure, heart rate and respiration.

Analytics requires large processing power at its disposal. Advances in this field have been largely fuelled by similar advances in a companion technology, namely cloud computing. The latter allows computing power to be purchased on demand, almost like a utility, and has been a key enabler for analytics.

Data mining and analytics allow industries to plumb the data sets held in their organisations through the process of selecting, exploring and modelling large amounts of data, to uncover previously unknown data patterns which can be channelised to business advantage.

Analytics helps in unlocking the secrets hidden in data, provides real insights to businesses, and enables businesses and industries to make intelligent and informed choices.

In this age of information deluge, data mining and analytics are bound to play an increasingly important role and will become indispensable to the future of businesses.


The “Internet of things”

Published in The Hindu, Sep 22, 2010 by Tinniam V Ganesh – http://bit.ly/9Jlwx5

We are progressively moving towards a more connected world, using a variety of devices to connect to each other and to the Net. We are connected to the network through the mundane telephone, mobile phone, desktop, laptop or iPads. We use the devices for sending, receiving, communicating or for our entertainment. In 2005, the International Telecommunications Standardisation Sector (ITU-T), which coordinates standards for telecommunications on behalf of the International Telecommunication Union, came up with a seminal report, “The Internet of Things.” The report visualises a highly interconnected world made of tiny passive or intelligent devices that connect to large databases and to the “network of networks” or the Internet.

This ‘Internet of Things’ or M2M (machine-to-machine) network adds another dimension to the existing notions of networks. It envisages an anytime, anywhere, anyone, anything network bringing about a complete ubiquity to computing. In Mark Weiser’s classic words, “the most profound technologies are those that disappear and weave themselves into the fabric of everyday life until they are indistinguishable from it”. This will result in the metamorphosis of the network from a dumb pipe to intelligence at the edges. Embedded intelligence in the things themselves will further enhance the power of the network.

The portents of this highly revolutionary technology are already visible. The devices in this M2M network will be made up of passive elements, sensors and actuators that communicate with the network. Soon everyday articles from tyres to toasters will have these intelligent devices embedded in them.

RFID tags

Radio Frequency Identification (RFID) was the early and pivotal enabler of this technology, with a tiny tag responding in the presence of a receiver which emits a signal. Retailers keep track of the goods going out of warehouses to their stores with this technology.

In a typical scenario one can imagine a retail store in which all items are RFID tagged. A shopping cart fitted with a receiver can automatically track all items placed in the cart for immediate payment and check-out. Another interesting application is in the payment of highway tolls. Similarly, plans are already afoot for embedding intelligent devices in the tyres of automobiles. The devices will be used for measuring the tyre pressure, speed etc., and warn the drivers of low pressure or tyre wear and tear. The devices will send data to the network, which can be processed.

This technology is also well suited for insurance companies which can give discounts to safe drivers based on the data sent by these sensors. Other promising applications include an implantable device capable of remote monitoring of patients with heart problems. It can warn the physician when it detects an irregularity in the patient’s heart rhythm.

The ‘Internet of Things’ can also play an important role in monitoring the stress and the load on bridges and forewarn when the stress is too great and a collapse is imminent. In mines, the sensors can send real-time info on the toxicity of the air, the structural strength of the walls or the possibility of flooding.

The day is not far off when devices will connect to the Internet to monitor and control the environment, improving our daily lives and warning us of impending hazards.


The evolutionary road for the Indian Telecom Network

Published in Voice & Data Apr 14, 2010

Abstract: In this era of technological inventions, with a plethora of technologies, platforms and paradigms, how should the Indian telecom network evolve? The evolutionary path for the telecom network should clearly be one that ensures both customer retention and growth, while at the same time being capable of handling the increasing demands on the network. The article below looks at some of the technologies that make the most sense in the current technological scenario.

The wireless tele-density in India has now reached 48% and is showing no signs of slowing down. The number of wireless users will only go up as penetration moves farther into the rural hinterland. In these times Communication Service Providers (CSPs) are faced with a multitude of competing technologies, frameworks and paradigms. On the telecom network side there are 2G, 2.5G, 3G and 4G. To add to the confusion there is a lot of buzz around cloud technology, virtualization, SaaS, femtocells etc., to name a few. With the juggernaut of technological development proceeding at a relentless pace, senior management in telcos and service providers the world over are faced with a bewildering choice of technologies while trying to keep spending at sustainable levels.

For a developing economy like India the path forward for telcos and CSPs is to gradually evolve from the current 2.5G services to the faster 3G services without trying to rush to 4G. The focus of CSPs and operators should be on customer retention and maintaining customer loyalty. The drive should be to increase the customer base by providing a superior customer experience rather than jumping onto the 4G bandwagon. 4G technologies, for example LTE and WiMAX, make perfect sense in countries like the US or Japan where smartphones are within the reach of a larger section of the populace. This is primarily due to the popularity and affordability of these smartphones in countries like the US. In India smartphones, when they come, will be the sole preserve of high-flying executives and the urban elite. The larger population in India would tend to use regular mobile phones for VAS services like mobile payment and e-ticketing rather than for downloading video or watching live TV. In the US, it is rumored, iPhones with their data-hungry applications almost brought a major network to its knees. Hence, in countries like the US, it makes perfect sense for network providers to upgrade their network infrastructure to handle the increasing demand from data-hungry applications; the upgrade to LTE or WiMAX would be a logical step there.

In our nation, with the growth in the number of subscribers, the thrust of service providers should be to promote customer loyalty by offering differentiated Value Added Services (VAS). The CSPs should try to increase network coverage so that the frustration of lost or dropped calls is minimal, and focus on providing a superior customer experience. The service providers should try to attract new users by offering an enhanced customer experience through special Value Added Services. This becomes all the more important with the impending move to Mobile Number Portability (MNP). Once MNP is in the network, many subscribers will switch to service providers who offer better services and have more reliable network coverage. Another technique by which service providers can attract and retain customers is the creation of app stores. In the US, app stores for the iPhone have spawned an entire industry.

Mobile apps from app stores, besides providing entertainment and differentiation, can also be a very good money spinner. While the economy continues to flounder the world over, service providers should try to reduce their Capital Expenditure (Capex) and their Operating Expenditure (Opex) through the adoption of Software-as-a-Service (SaaS) for their OSS/BSS systems. Cloud technology, besides reducing the Total Cost of Ownership (TCO) for network providers, can be quite economical in the long run. Prior to migrating to the cloud, however, all aspects of security should be thoroughly investigated by the network providers, along with critical decisions as to which areas of their OSS/BSS they would like to migrate to the cloud.

While a move to leapfrog from 2G to 4G may not be required, it is imperative that, with the entry of smartphones like the iPhone 3GS, Nexus One and Droid into India, the CSPs be in a position to handle increasing bandwidth requirements. Some techniques to handle the issue of data-hungry smartphones are to offload data traffic to Wi-Fi networks or femtocells. Besides, professionals these days use dongles with their laptops to check email, browse and download documents. All these put a strain on the network, and offloading data traffic to femtocells and Wi-Fi has been the solution chosen by leading network providers in the US.

Conclusion: The road for the gradual evolution of the network for network operators and service providers is to:

1. Evolve to 3G services from 2G/2.5G.
2. Create app stores to promote customer retention and loyalty and to offer differentiated VAS services.
3. Improve network coverage uniformly and enhance the customer experience through specialized app stores.
4. Judiciously migrate some of the OSS/BSS functionality to the cloud or use SaaS, after investigating which applications of the enterprise can move to the cloud.
5. Offload data traffic to Wi-Fi networks or femtocells.

Tinniam V. Ganesh
