Is cloud computing a game changer?
A game-changing technology allows us to do something that was not possible before. Take the instance of the mobile phone, which enables us to communicate from anywhere. Alternatively, it greatly simplifies a specific task: the Internet drastically simplifies the search and retrieval of information. Finally, it makes an unviable process viable by reducing cost by an order of magnitude. Take the example of the railroad, which cut the cost of transporting goods so sharply that moving them over land became viable.
Udayan Banerjee, CTO, NIIT Technologies, wonders whether cloud computing is a game changer, and if so, to what extent. He conducts an objective evaluation by assessing cloud computing against the three criteria mentioned above.
Does it greatly simplify any specific task?
There is one thing which cloud computing simplifies to a great extent. It is the process of acquiring and releasing computing capacity, which can now be done on-the-fly, with no space constraints, no infrastructure constraints, no shipping delays and virtually no installation time.
This capability can be leveraged in many situations. For example, while developing a new release of an application, the normal practice is to take a scheduled shutdown of the system, reinstall the new application, do a quick check and release it to production. Most of the time, this process is smooth. Once in a while, an unforeseen problem erupts, which prolongs the downtime.
In the cloud computing paradigm, there is no need to stop the production servers. A parallel production facility can be set up, where the latest application version can be deployed and tested. Once the installation passes all necessary regression tests, a simple switch can be performed, followed by decommissioning of the earlier infrastructure. The additional cost involved will only be marginal.
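The switch-over described above can be simulated in a few lines of Python. This is only a sketch: the function and dictionary names are illustrative, and a real setup would repoint a load balancer or DNS entry rather than a dictionary key.

```python
# Minimal simulation of a parallel-environment switch-over.
# run_regression_tests() is a stand-in for a real regression suite.

def run_regression_tests(env: dict) -> bool:
    """Hypothetical check; here we only verify a version is present."""
    return env.get("app_version") is not None

def switch_over(router: dict, new_env: dict) -> dict:
    """Point 'production' at the new environment only after tests pass."""
    if not run_regression_tests(new_env):
        raise RuntimeError("regression tests failed; production untouched")
    old_env = router["production"]
    router["production"] = new_env          # the 'simple switch'
    new_env["status"] = "live"
    old_env["status"] = "decommissioned"    # release the old instances
    return old_env

router = {"production": {"app_version": "1.0", "status": "live"}}
candidate = {"app_version": "2.0", "status": "staging"}
retired = switch_over(router, candidate)
print(router["production"]["app_version"])  # → 2.0
```

The key property is that production is never stopped: the old environment keeps serving until the candidate has passed its tests, and the marginal cost is only the few hours both environments run side by side.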
Does it make an unviable process viable by reducing the cost by an order of magnitude?
Cloud computing has been aptly described as a marketplace where a computing service provider, with a large number of networked computer systems, allows a computing service consumer to use a slice of that processing power and storage, with charges levied for actual usage. The cost saving therefore is achieved through economy of scale and minimization of idle resources.
Larger organizations can save by not having to plan for peak load. Smaller organizations can access expensive applications, where the usage may be sporadic.
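A back-of-the-envelope comparison makes the point about not provisioning for peak load. All rates and server counts below are made-up round numbers for illustration; the shape of the arithmetic, not the figures, is what matters.

```python
# Owning capacity sized for peak load vs. paying per use for average load.
hours_per_month = 730
peak_servers = 20                   # capacity needed only at peak
avg_servers = 4                     # average actual demand
owned_cost_per_server_hour = 0.10   # hypothetical amortised hardware + power
cloud_cost_per_server_hour = 0.15   # higher unit price, but pay per use

owned_bill = peak_servers * hours_per_month * owned_cost_per_server_hour
cloud_bill = avg_servers * hours_per_month * cloud_cost_per_server_hour

print(f"owned: ${owned_bill:.0f}/month, cloud: ${cloud_bill:.0f}/month")
```

Even at a higher unit price, paying only for the four servers actually used on average comes out far cheaper than owning twenty to cover the peak; the sixteen idle servers are the cost that cloud computing eliminates.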
Points to remember when designing a cloud application
Designing applications for the cloud is not business as usual. Though it is not rocket science either, it has several nuances which are somewhat counterintuitive.
Let me be more specific about what I mean, by designing an application for the cloud. First, let me exclude the whole universe of Software-as-a-Service (SaaS), where the applications are ready for use and need not be built. That implies that if you are planning to use Salesforce.com, Google Apps or any other similar service, this post may not be relevant. However, if you are planning to utilize Amazon EC2, Microsoft Azure or Google App Engine then read on.
The interest in cloud computing is primarily due to its “pay-for-what-you-use” approach, which also implies “do not pay for idle resources.”
First, let us look at what resources you are charged for:
• CPU utilisation
– Amazon EC2 = Machine instance deployed (available in different capacities)
– Google GAE = CPU cycles used
– Azure = Instances of application deployed
• Data storage
• Data read-write
• Input-output bandwidth used
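A toy bill estimator covering the four charge dimensions listed above makes the cost model concrete. The rates are hypothetical round numbers, not any provider's actual price sheet.

```python
# Hypothetical per-unit rates for the four billed dimensions.
RATES = {
    "instance_hour": 0.10,        # CPU: per machine-instance hour
    "storage_gb_month": 0.05,     # data storage
    "io_requests_million": 0.40,  # data read-write
    "bandwidth_gb": 0.12,         # input-output bandwidth
}

def estimate_bill(instance_hours, storage_gb_months, io_millions, bandwidth_gb):
    """Sum the four usage-based charges into one monthly figure."""
    return (instance_hours * RATES["instance_hour"]
            + storage_gb_months * RATES["storage_gb_month"]
            + io_millions * RATES["io_requests_million"]
            + bandwidth_gb * RATES["bandwidth_gb"])

# Two small instances for a month, 100 GB stored, 50M requests, 200 GB out:
monthly = estimate_bill(2 * 730, 100, 50, 200)
print(f"${monthly:.2f}")
```

Notice that the bill has four independent levers; the design trade-offs discussed below are all about shifting load from an expensive lever to a cheap one.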
Let us look at it from the perspective of Amazon EC2, which is the most widely used cloud service.
• The first implication is that your application needs to scale to multiple machines when load increases and has to have the capability to acquire and release machine instances dynamically.
When you scale, do you take a small instance or a large one? You will need to strike a balance.
• If you acquire a machine instance but do not fully utilize it, you still pay for it. That argues for growing your CPU capacity in smaller increments, i.e., with smaller instances.
• However, irrespective of the size of the machine, the base OS, system software and application software occupy roughly the same amount of memory, so a smaller machine leaves a smaller percentage of its memory available to the application. That is a strong case for using larger machine instances!
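The memory argument above is simple arithmetic: the system-software footprint is roughly fixed, so the usable fraction grows with instance size. A quick sketch, using a hypothetical 1.5 GB overhead figure:

```python
# Fixed OS + system-software footprint (hypothetical figure).
fixed_overhead_gb = 1.5

for total_gb in (2, 4, 16):
    usable = total_gb - fixed_overhead_gb
    print(f"{total_gb} GB instance: {usable / total_gb:.1%} available to the app")
```

A 2 GB instance leaves only a quarter of its memory for the application, while a 16 GB instance leaves over 90% — which is exactly why per-GB pricing does not tell the whole story.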
However, remember to make the best use of what is available for free:
• You pay for server power—client power is free. Therefore, why not design the application so that you utilize as much of client power as possible? HTML5 may become very useful and allow you to do several things on the client which would have been difficult to do earlier.
The charges for data read-write and I/O bandwidth are based on what you actually use, so:
• Optimizing your application to minimize bandwidth usage and reduce data read-write will give you a direct cost saving.
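One common way to reduce billed reads is a read-through cache in front of the datastore. The sketch below is illustrative: `backend_read()` stands in for any billable datastore call, and the counter shows how many reads would actually be charged.

```python
# Read-through cache: repeat requests for the same key hit the local
# cache instead of the billable backend.
reads_billed = 0

def backend_read(key):
    """Stand-in for a billable datastore read."""
    global reads_billed
    reads_billed += 1               # every call here costs money
    return f"value-of-{key}"

cache = {}

def cached_read(key):
    if key not in cache:
        cache[key] = backend_read(key)
    return cache[key]

for _ in range(1000):
    cached_read("config")           # 1000 requests, only 1 billed read

print(reads_billed)  # → 1
```

The saving is direct: a thousand application-level reads translate into a single billed read. (The flip side of this trade-off is discussed at the end of this post — a pricing change can invert it.)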
When you are dealing with multiple machines, you are dealing with parallelism. Thinking parallel is not easy; it requires some unlearning and a dive into a new technology area.
• Understand Map-Reduce and Functional Programming.
• For specific requirements, there may be a more cost-effective alternative to the RDBMS. Look around; there are several alternatives, such as Apache CouchDB.

Finally, what happens if Amazon decides to change the relative costs of the different types of resources? Your design trade-offs might be called into question. Suddenly you might find that the caching you had done to reduce data read-write has actually become counterproductive!
• Therefore, what you may need is a flexible design where you can dynamically turn certain parts of the code on or off.
In conclusion, it can be said that while cloud computing may not be a game changer, it can certainly enable enterprises, large and small, that understand its finer nuances to derive significant cost benefits.