Open Compute Project sets focus on networking, modular data centers

Enterprises are always looking for ways to streamline their data centers. The emergence of software-defined technologies for cloud storage, networking and compute is a prime example of how these organizations are taking new approaches to managing data, with particular attention devoted to hardware optimization. By separating the intelligence that processes information from the underlying appliances, companies can more readily adjust their cloud infrastructure in response to new business requirements and evolving technical conditions.

SDS and SDN are just two drivers of the current relentless push toward more efficient data centers. The Facebook-led Open Compute Project has been making strides in the custom hardware sector by spurring its community to rethink what organizations actually need from servers, network switches and other equipment. Components that were taken for granted or rarely considered in the past – such as a vendor logo on the side of a server, which, as it turns out, impedes airflow – have been removed or rethought with the aim of creating designs that are light on vanity and heavy on customizability and energy efficiency.

The ambitious aims of the Open Compute Project have caught the attention of many of the world's largest technology firms. In addition to Facebook, Microsoft and IBM are now contributors to the initiative. Last month, Microsoft announced that it would be contributing designs for its highest-end data center servers, the ones that power hyperscale services such as Windows Azure and Office 365.

Even though servers have received the most attention from contributors, innovation from the Open Compute Project spans a wide range of technologies. In particular, recent developments have highlighted the improvements to be made in network switches and Ethernet, underscoring the project's growing interest not only in building efficient appliances but also in finding the best ways to connect them. Ultimately, it's possible that open source cloud hardware could make data centers more capable of handling demanding workloads and scale-out operations.

Open Compute Project focus on networking creates opportunities to improve enterprise cloud storage
Open Compute Project contributors have also recently dipped their toes in the networking waters. Last year, they set their sights on creating a top-of-rack switch that would be agnostic of the underlying operating system while offering excellent performance and flexibility. The body highlighted the individual submissions of Broadcom, Mellanox, Intel and Cumulus Networks as likely to receive approval for OCP use.

At a broader level, the Open Compute Project has become increasingly interested in connecting new cloud infrastructure to existing IT investments. As Arthur Cole noted for Enterprise Networking Planet, the initiative's strategy for reaching this goal has perhaps been counterintuitive. Rather than promote the use of InfiniBand and PCIe, contributors have instead pushed technologies such as Fibre Channel and Ethernet.

For example, Seagate has announced additional development tools for its Kinetic Open Storage platform: the Kinetic Ethernet Drive and the Kinetic T-Card adapter. These tools support replacing SATA and SAS HDD rack backplanes with Ethernet while ensuring that applications and systems can still work against standard SAS specifications. Kinetic Open Storage connects storage media directly to Ethernet through a key/value API, eliminating the storage server tier, lowering total cost of ownership and encouraging simplicity and scalability.
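To make the key/value model concrete, here is a minimal sketch in Python of what addressing an Ethernet-attached drive might look like. The KineticDrive class, its host and port parameters, and the put/get calls are illustrative assumptions for this article, not the actual Kinetic client library or wire protocol.

```python
# Illustrative sketch only: KineticDrive is a hypothetical stand-in for a
# Kinetic-style client. The real platform speaks a binary protocol over
# TCP, but the key/value shape of the interaction is the same idea.

class KineticDrive:
    """Pretend client for a single Ethernet-attached key/value drive."""

    def __init__(self, host, port=8123):  # port value is illustrative
        self.host = host
        self.port = port
        self._store = {}  # stands in for the drive's media

    def put(self, key, value):
        # In a real deployment this would serialize a PUT message and send
        # it straight to the drive over Ethernet, with no storage server tier
        # in between.
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)


# An application addresses drives directly by network location.
drive = KineticDrive("10.0.0.21")
drive.put(b"user:42:profile", b'{"name": "example"}')
print(drive.get(b"user:42:profile"))
```

The point of the model is visible even in this toy version: applications talk to drives by key, over the network, rather than through an intermediate server that owns the block devices.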

Other vendors have also been keen to contribute networking innovations to the Open Compute Project. Mellanox showed off its 40 GbE ConnectX-3 Pro NIC, which supports RDMA over Converged Ethernet (RoCE) as well as overlay network offloads, delivering high performance alongside low latency and reduced power consumption. The Open Compute Hackathon also brought to light Adaptive Storage, a project worked on by engineers from Facebook, Adapteva and I/O Switch Technologies. Adaptive Storage was built from industry-standard equipment and emphasized individual direct connections between disks and network switches, so that each micro server can access any disk and coordinate with others to process data sets on Hadoop.

"Adaptive Storage raises fundamental questions about the way storage and compute are connected and about the power requirements for big data," stated team member Ron Herardian on the Open Compute Project blog. "In just 24 hours, with no budget and with a few boxes of computers, circuit boards and networking equipment, our small team of engineers was able to imagine a totally new way of organizing Hadoop data nodes, build and demonstrate a working prototype running on ARM processor-based micro servers using open source software, and show production-ready engineering CAD drawings for a production implementation."

Open Compute Project moves toward end-to-end data center infrastructure solutions
What's the consequence of these steps forward in open networking? The Open Compute Project could evolve from an initiative primarily focused on server designs into one that provides guidance for building an entire storage and networking setup.

Moreover, Facebook appears eager to make it as easy as possible for organizations to build and optimize data centers, and Open Compute Project is an important part of that effort. The company has recently been promoting its rapid deployment data center (RDDC) model, a framework that blends aspects of modular technology and home-building kits. It put RDDC into practice while constructing a second company data center in Luleå, Sweden.

"The idea here is to develop a set of instructions that a crew can go out and deploy a solution that we can deploy almost anywhere," wrote Facebook design engineer Marco Magarelli in a blog post. "We will continue to share our learnings about RDDC design and construction so the OCP community can contribute their ideas and help advance data center design and construction that much more quickly."

The modular data center isn't a new concept, but it is novel for organizations to make their designs publicly available, as Facebook has done via the Open Compute Project. Magarelli has compared the RDDC approach to building a car or assembling furniture: automakers sometimes add components to a chassis before finishing the vehicle, while Ikea ships furniture in pieces that the end user puts together.
