Open source community shows way forward for improving data center efficiency

What matters most to data center operators? Many IT managers strive for high uptime and low power consumption while staying under a tight budget, but it isn't always clear what steps they should take to hit these targets. Organizations need specific, actionable ways to improve data center efficiency, and the open source community can help.

Initiatives such as the Open Compute Project (OCP) have sparked conversations about how to reconceive the data center around custom takes on industry-standard appliances. Rather than lean on proprietary hardware that can be expensive and lock the customer into an inflexible contract, buyers can adopt solutions built on low-cost, high-capacity cloud storage and standard networking infrastructure such as Ethernet.

On top of that, open source software, most notably OpenStack, has made it increasingly feasible to forgo integrated products and instead pair customized platforms with highly economical storage disks, switches and server racks. Even enterprises that don't use OpenStack or participate in OCP can draw on these efforts to make their data centers more operationally efficient, by streamlining storage, networking and compute and by rethinking seemingly routine processes such as infrastructure cooling.
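
To make that concrete, here is a minimal sketch of what provisioning storage on commodity hardware through open source tooling can look like, using the openstacksdk Python client against a Cinder block storage service. The cloud name, volume type and volume name are assumptions for illustration; the point is that the provisioning path consists of ordinary open APIs rather than a proprietary management console.

```python
import openstack

# Credentials come from a standard clouds.yaml entry; the cloud name
# "commodity-dc" is hypothetical.
conn = openstack.connect(cloud="commodity-dc")

# Request a volume from a backend built on inexpensive disks. The volume
# type "bulk-hdd" is an assumed label for the operator's low-cost tier.
volume = conn.block_storage.create_volume(
    name="archive-vol-01",
    size=500,               # size in GiB
    volume_type="bulk-hdd",
)

print(volume.id, volume.status)
```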

Open Compute Project may show the way forward on data center efficiency
The Open Compute Project's most prominent contributors are hyperscale operators such as Facebook and Rackspace. However, its evolution shows that it pays for data center operators of any size to think about energy efficiency and vanity-free design.

Facebook vice president of infrastructure Jay Parikh recently discussed how open source has enabled the company to devote more resources to improving and scaling its operations rather than simply maintaining uptime. The Open Compute Project has yielded innovations in cold storage and motherboard design that make it easier for Facebook to handle massive quantities of data while staying vendor-agnostic. At the same time, open source software has played an important role in meeting the company's specific requirements.

"We ran a lot of open source software when we started Facebook, and, over the years, as Facebook grew, the infrastructure team only focused on one thing: Keep the site up and running," stated Parikh, according to SiliconANGLE. "We stretched the practical limits of every part of our infrastructure over and over again: The software, the hardware, the data center and the network. The story that we had to buy fans and cool off the data center is not folklore."

As a result of these optimizations, Facebook has saved $1.2 billion over the past three years. Its OCP designs were 38 percent more energy efficient than their predecessors and cost 24 percent less. In addition to streamlining its hardware fleet, Facebook also rewrote parts of its front end, eliminating the need for additional server procurement.

Overall, Open Compute Project contributions made it much easier for Facebook to move quickly on efficiency, illustrating the value of knowledge and design sharing. The company also learned a lot about cloud storage hardware in the process, weighing different types of SSDs, HDDs and even Blu-ray discs.

Facebook's infrastructure engineers initially favored magnetic storage on cost grounds, but gravitated toward flash after taking long-term factors such as failure rate and power draw into account. Eventually, the team settled on netbook-class SSDs that could be scaled economically while still delivering solid performance. It also turned to Blu-ray cabinets that store 1 petabyte of data each, with a rated durability of 50 years.

These considerations underscore the complex process of finding the right mix of storage appliances for the data center: while flash is orders of magnitude faster than spinning media, managers should still consider which metrics matter most for each workflow. Flash is great for frequently accessed Tier 0/1 data, but its strengths are less relevant for cold storage, as illustrated by Facebook's decision to go with Blu-ray. Many enterprises have taken a similar tack, opting for performance and capacity HDDs to handle cold storage.
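
As a rough illustration of that tiering logic, the sketch below encodes a media-selection rule of thumb in Python. The access-frequency and latency thresholds are invented for the example and are not figures from Facebook or the Open Compute Project.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    reads_per_day: int       # how often the data set is touched
    latency_sensitive: bool  # does the application need very fast access?

def recommend_media(w: Workload) -> str:
    """Map a workload to a storage tier using illustrative thresholds."""
    if w.latency_sensitive or w.reads_per_day > 10_000:
        return "flash (Tier 0/1)"          # hot, frequently accessed data
    if w.reads_per_day > 10:
        return "capacity HDD (warm tier)"  # occasionally accessed data
    return "optical/archive (cold tier)"   # rarely touched, long-retention data

for w in (Workload("user-sessions", 500_000, True),
          Workload("monthly-reports", 40, False),
          Workload("photo-archive", 1, False)):
    print(f"{w.name}: {recommend_media(w)}")
```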

The software-defined data center enables even greater flexibility in compute and storage
New approaches to hardware procurement and design are just one part of improving data center efficiency. The wave of software-defined technologies has yielded new opportunities for cost reduction and infrastructure improvement.

The software-defined storage market was worth $360 million in 2013 and is expected to top $3.7 billion by 2016, according to SiliconANGLE. There are several key drivers behind its accelerating adoption, including the desire for better energy efficiency, scalability and security.

SiliconANGLE's Jack Woods examined some of the key requirements for building an effective software-defined data center. For example, more facilities now have to support high-performance computing (HPC), which demands enormous processing and storage resources. Because HPC equipment runs at relatively high density, it's important for operators to invest in appliances with effective cooling and energy-efficient designs. A sound HPC strategy also contributes to overall savings on power and operating costs.

"Energy efficiency remains an item at the top [of] most enterprises cost concerns list," explained Woods. "Power stands as the largest operational cost within massive data centers, creating a need for a solution that maximizes energy efficiency without sacrificing performance."

Improving data center efficiency often means reducing expenses by finding a better answer to an old problem. With software-defined storage, resources can be provisioned based on the needs of individual applications, and workloads can be migrated more easily between facilities, lowering the cost per workload.
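
A minimal sketch of the kind of placement policy this flexibility enables: choose the facility that can hold a workload at the lowest cost, and migrate when the economics change. The facility names, capacities and cost figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Facility:
    name: str
    free_capacity_tb: float
    cost_per_tb_month: float  # blended power, space and hardware cost

def place_workload(required_tb: float, facilities: list) -> Facility:
    """Pick the cheapest facility with enough free capacity."""
    candidates = [f for f in facilities if f.free_capacity_tb >= required_tb]
    if not candidates:
        raise RuntimeError("no facility has enough free capacity")
    return min(candidates, key=lambda f: f.cost_per_tb_month)

facilities = [
    Facility("east-1", free_capacity_tb=120, cost_per_tb_month=22.0),
    Facility("west-2", free_capacity_tb=400, cost_per_tb_month=15.5),
]

target = place_workload(required_tb=80, facilities=facilities)
print(f"Place workload in {target.name} at ${target.cost_per_tb_month}/TB-month")
```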

The flexibility and abstraction of software-defined storage mean that data centers can be made less reliant on local infrastructure. Power loads can be shifted so that a single failure doesn't compromise all operations. That resilience matters in light of a recent survey from the Ponemon Institute and Emerson Network Power, which found that more than 8 in 10 respondents had lost utility power to a facility within the last two years.
