Cloud providers balance flexibility and specialization through collaboration

Technology-driven companies have long recognized the value of building in-depth expertise in specific areas of IT, and that makes sense given the growing complexity of cloud infrastructure. Providers have also learned that they can add versatility to their offerings through programs such as the Seagate Cloud Builder Alliance. The benefits of such partnerships extend beyond the vendors themselves to their customers, as cloud providers are able to draw on the specialized solutions and support of leading businesses in the industry. Even so, there may still be value in fostering versatility within the cloud ecosystem.

Researchers at the Massachusetts Institute of Technology recently explored what would happen if data center hardware became more versatile. MIT News columnist Larry Hardesty compared this to a call center with specialized service representatives.

"It's far more cost-effective to train each customer service representative on the technical specifications of a single product than on all the products," Hardesty wrote. "But what if a bunch of calls about a product come in, and the center doesn't have enough specialists to field them?"

The same dynamic applies to data centers when many diverse workloads enter the IT environment. When cloud hardware is designed for specific tasks, a sudden spike in demand for one of them can create performance bottlenecks. The MIT researchers found that making even a small percentage of servers generic – or, in the analogy, training a few call center reps across several products – would produce an exponential decrease in service delays.
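
To make that intuition concrete, here is a minimal simulation sketch – not the researchers' actual model – of two job types served either entirely by specialized servers or by a mix that includes a couple of generic servers able to pull work from either queue. The server counts, arrival rates and slotted-time service assumptions are hypothetical, chosen only to illustrate the queueing effect described above.

    import math
    import random
    from collections import deque

    def poisson(rng, lam):
        """Draw a Poisson-distributed arrival count (Knuth's method)."""
        threshold = math.exp(-lam)
        count, product = 0, 1.0
        while True:
            product *= rng.random()
            if product <= threshold:
                return count
            count += 1

    def simulate(n_specialized, n_flexible, arrivals_per_slot, n_slots=50000, seed=1):
        """Slotted-time queueing toy with two job types, A and B.

        n_specialized     -- dedicated servers per job type (serve only that type)
        n_flexible        -- generic servers that can serve either type
        arrivals_per_slot -- mean arrivals per slot, per job type
        Returns the average number of slots a job waits before service.
        """
        rng = random.Random(seed)
        queues = {"A": deque(), "B": deque()}
        total_wait = completed = 0

        for t in range(n_slots):
            # New jobs arrive in bursts of random size for each type.
            for jtype in queues:
                for _ in range(poisson(rng, arrivals_per_slot)):
                    queues[jtype].append(t)

            # Each specialized server finishes one job of its own type per slot.
            for jtype in queues:
                for _ in range(n_specialized):
                    if queues[jtype]:
                        total_wait += t - queues[jtype].popleft()
                        completed += 1

            # Each generic server takes one job from whichever queue is longer.
            for _ in range(n_flexible):
                jtype = max(queues, key=lambda k: len(queues[k]))
                if queues[jtype]:
                    total_wait += t - queues[jtype].popleft()
                    completed += 1

        return total_wait / max(completed, 1)

    if __name__ == "__main__":
        # Same total capacity in both cases (20 servers), loaded near saturation.
        print("all specialized :", simulate(10, 0, arrivals_per_slot=9.5))
        print("two generic     :", simulate(9, 2, arrivals_per_slot=9.5))

With the same total number of servers, the configuration that reserves a couple of generic machines should report noticeably shorter average waits when demand surges, mirroring the effect the researchers describe.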

Adding versatility to the cloud data center
One of the cloud's core value propositions has been cost-effectiveness at scale, and that continues to be the case. However, as more companies migrate their assets, providers are likely to face an even more heterogeneous mix of workloads. Building versatile infrastructure alone is not enough: companies must also manage resources intelligently across that hardware. According to MIT researcher John Tsitsiklis, this requires a smarter job scheduling algorithm.

The researchers suggested that an effective way to balance workloads would be to design the scheduling software to wait until a certain number of tasks have accumulated before distributing them across servers. By letting more jobs come in before dispatching, the algorithm can assign work more intelligently according to each piece of hardware's strengths.
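
As a rough illustration of that batching idea – a minimal sketch under assumed parameters, not the algorithm the researchers actually describe – the following scheduler holds incoming jobs until a small batch has accumulated, then assigns each job to the server that would finish it earliest, which naturally favors specialized hardware. The server names, job types, batch size and service times are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        specialty: str            # the job type this hardware is tuned for
        free_at: float = 0.0      # time at which the server finishes its current work

    # Hypothetical service times: matched hardware finishes a job three times faster.
    MATCHED_TIME, MISMATCHED_TIME = 1.0, 3.0

    def service_time(server, job_type):
        return MATCHED_TIME if server.specialty == job_type else MISMATCHED_TIME

    class BatchingScheduler:
        """Hold submitted jobs until BATCH_SIZE of them have accumulated, then
        dispatch the whole batch, matching jobs to specialized hardware."""
        BATCH_SIZE = 4

        def __init__(self, servers):
            self.servers = servers
            self.pending = []

        def submit(self, job_type, now):
            self.pending.append(job_type)
            if len(self.pending) < self.BATCH_SIZE:
                return []                      # keep waiting for more jobs
            batch, self.pending = self.pending, []
            return self._dispatch(batch, now)

        def _dispatch(self, batch, now):
            assignments = []
            for job_type in batch:
                # Pick the server that would finish this job earliest; with a full
                # batch in hand, jobs tend to land on hardware built for them.
                best = min(self.servers,
                           key=lambda s: max(s.free_at, now) + service_time(s, job_type))
                start = max(best.free_at, now)
                best.free_at = start + service_time(best, job_type)
                assignments.append((job_type, best.name, start, best.free_at))
            return assignments

    if __name__ == "__main__":
        cluster = [Server("db-1", "database"), Server("db-2", "database"),
                   Server("web-1", "web"), Server("web-2", "web")]
        scheduler = BatchingScheduler(cluster)
        for t, job in enumerate(["web", "database", "web", "database"]):
            for job_type, server, start, finish in scheduler.submit(job, now=float(t)):
                print(f"{job_type:>8} -> {server} (runs {start:.1f}-{finish:.1f})")

In this toy setup, holding the four mixed jobs until dispatch lets each land on hardware suited to it rather than being pushed immediately to whatever machine happens to be idle.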

Specialization is still a good idea
MIT's researchers suggested that versatile hardware design could improve service delivery considerably, but that does not diminish the value of specialization. Particularly as cloud providers work together to develop more robust service portfolios, it is possible to expand versatility while still being able to deliver highly specialized offerings.

As InfoWorld contributor Pete Babb recently noted, the concept of collaboration is what drove the New York Stock Exchange to build a "community cloud." The platform provides the infrastructure resources for all members to handle billions of transactions per minute, but the value extends beyond that.

"To set itself apart from other cloud platforms, the Capital Markets Community Platform also provides specialized services, apps, and analytics to suit the particular needs of its customers, with the idea it's not a cloud that users have to bend to their needs, it's a cloud built specifically with their needs in mind," Babb wrote.

The increasingly collaborative cloud environment benefits providers because it lets them expand their service offerings into niches they may not otherwise have been able to serve. These developments can also lower total cost of ownership, as guidance from partner companies helps organizations adopt best practices.
