Riding the Cloud of Multi-Million-Dollar Infrastructure

Do you still remember the OLPC (One Laptop per Child) program, or other programs with a similar purpose? Let me save you some googling by citing the mission of this program from its official site: “To create educational opportunities for the world’s poorest children by providing each child with a rugged, low-cost, low-power, connected laptop with content and software designed for collaborative, joyful, self-empowered learning.” From a recipient’s point of view, what could this mean? The meanings and implications may differ, but one thing is certain: the recipient, who could not otherwise afford a laptop at its normal price, is given a luxury at a very affordable price, the luxury of owning a gateway to the advances of technology.

A similar story may soon repeat itself in the area of computation. Less than a decade has passed since the year 2000, yet if we compare the computing power of a node (e.g., a standalone PC, server, or workstation) in that year with today’s, the difference is striking. In less than a decade we have seen computers become faster and hardware prices become cheaper. These days, multi-core processors are becoming common, multiplying the computation speed available from what used to be single-core machines.

The network itself is also growing in capability at a rapid pace. Twenty years ago, the idea of terabit Ethernet would have seemed too far-fetched to discuss. These days, gigabit Ethernet is becoming common, so the era of terabit or even petabit links may arrive within the next decade or two. What is essential about this advancement is that the idea of connecting several distributed resources, spanning different networks, so that they collaborate to provide a much more powerful computing system can now be leveraged far more effectively.

Imagine a datacenter with thousands of workstations in California cooperating with other datacenters in Atlanta to process queries for a complex computation. Despite the distributed nature of the networks, the datacenters can coordinate the work by sharing parts of the computation among the nodes at each site, almost ignoring the fact that they are located in different regions. Conversely, if the inter-network links are not fast enough to carry the traffic, such coordination and communication between distributed sites would create more problems than the larger pool of computing nodes is worth.
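To make that trade-off concrete, here is a minimal, purely hypothetical Python sketch of the fan-out/fan-in pattern described above. The two "datacenters", the per-chunk work, and the simulated network delays are all assumptions for illustration, not a description of any real system; the point is simply that time spent shipping data between sites competes with the speedup gained from having more nodes.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-request network delays (seconds) between sites; illustration only.
NETWORK_DELAY = {"california": 0.01, "atlanta": 0.08}

def process_chunk(site, chunk):
    """Simulate sending a chunk of work to a remote site and waiting for its result."""
    time.sleep(NETWORK_DELAY[site])        # cost of crossing the network
    return sum(x * x for x in chunk)       # the actual (toy) computation

def distributed_sum_of_squares(data, sites=("california", "atlanta"), chunk_size=1000):
    # Fan out: split the input and assign chunks to sites in round-robin fashion.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=len(chunks) or 1) as pool:
        futures = [pool.submit(process_chunk, sites[i % len(sites)], chunk)
                   for i, chunk in enumerate(chunks)]
        # Fan in: merge the partial results coming back from both datacenters.
        return sum(f.result() for f in futures)

if __name__ == "__main__":
    start = time.time()
    total = distributed_sum_of_squares(list(range(10_000)))
    print(f"result={total}, elapsed={time.time() - start:.3f}s")
```

If the values in NETWORK_DELAY grow relative to the per-chunk compute time, the coordination overhead starts to dominate, which is exactly the caveat raised above.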

Big companies like Google and Amazon have a lot of computing nodes distributed across several datacenters. These nodes power the very fast processing behind every Google search result and the complexity of an e-commerce system like the one Amazon runs. Rumor has it that Google owns more than a hundred thousand computing nodes. Besides providing astonishing computing power, such a fleet also implies redundancy whenever the load is not high. From an executive’s point of view, redundancy equals inefficiency, and a mechanism should be found to turn that slack into a benefit for the company.

Clearly, individual users do not have the luxury of owning thousands of PCs or computing nodes. It can therefore be a smart idea to offer users the experience of running computations in a supercomputer-like environment, with some degree of guaranteed QoS stipulated in an SLA (Service Level Agreement). Users are given access to the infrastructure by exploiting the redundancy in the system, its unused cycles. Even though no single user ever gets the whole infrastructure to themselves, the offer can be very tempting. If all it takes is a credit card or a simple e-payment transaction to run an application backed by thousands of processors, in a complex system already proven to run massive applications serving millions of users every day, who would not at least want to test how it compares with their dedicated hosting? Imagine you have $100 and you can use infrastructure built with funds a thousand times (or more) larger than the amount in your hand; the offer might be too hard to resist.
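As a back-of-the-envelope illustration of that last point, here is a small Python sketch. The $0.10 per node-hour rental rate and the $1,000 purchase price are made-up, hypothetical figures, not any provider’s actual prices; the sketch only shows how a modest budget translates into access to far more hardware than the same money could buy outright.

```python
# Hypothetical figures for illustration; real prices and hardware costs vary widely.
BUDGET_USD = 100.0
PRICE_PER_NODE_HOUR = 0.10     # assumed rental rate per node per hour
COST_PER_OWNED_NODE = 1000.0   # assumed purchase price of a comparable machine

node_hours = BUDGET_USD / PRICE_PER_NODE_HOUR
owned_nodes = BUDGET_USD / COST_PER_OWNED_NODE

print(f"Renting: {node_hours:.0f} node-hours, e.g. {node_hours / 10:.0f} nodes for 10 hours")
print(f"Buying:  {owned_nodes:.1f} nodes, i.e. not even a single machine")
```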

And thank you, because you have voluntarily contributed to the company’s profit. Hats off to the creativity and the smart idea behind this paradigm. It is already here, so let’s give a warm welcome to the era of cloud computing.
