Optimizing energy usage through Seagate PowerChoice

As solutions continue to evolve, businesses will need to analyze the costs of new technology deployments more carefully. Achieving that level of analysis requires building expertise across the entire cloud environment, from software applications down to the hardware. Many factors contribute to data center total cost of ownership (TCO), including cooling costs and how heavily storage devices are utilized. Power usage already accounts for a significant share of data center TCO, and that share will likely grow as performance and capacity demands increase.

Data center power consumption has been rising for several years. As a September New York Times article noted, U.S. data centers consumed a total of 76 billion kilowatt-hours of power in 2010, amounting to approximately 2 percent of all the electricity used in the country. Despite growing capacity demands, data center operators are tasked with lowering energy consumption without sacrificing performance.

Optimizing power within a complex corporate data center requires technology that adapts to changing conditions. As storage capacities escalate due to virtualization and growing volumes of data, managing power usage intelligently is critical to reducing TCO. Seagate's PowerChoice, developed specifically for enterprise environments, offers organizations not only greater energy efficiency but also more control over how much power their hard drives consume.

PowerChoice expands on the energy-saving principles of PowerTrim, an earlier Seagate technology that reduces power draw during periods of command inactivity, by enabling deeper power reductions. As idle time increases, the power savings grow; just as importantly, the drives still respond quickly to commands even after long idle periods. In addition, PowerChoice supports four customizable modes, giving businesses significantly more control over their drives' power usage and allowing for up to a 54 percent reduction in the amount of energy used.

For context, a 2011 Ars Technica article found that a 1TB hard disk drive running at 3.0 Gb/s with a 32MB cache consumes an average of 8.4 watts. For an installation running 1,000 such drives 24 hours per day, that translates to approximately 201.6 kWh of electricity per day, or about 73,584 kWh per year. According to the U.S. Bureau of Labor Statistics, the average price per kWh in the United States is $0.135, so simply powering these drives would cost roughly $9,934 per year. A 54 percent reduction from PowerChoice would potentially cut the installation's consumption to about 33,849 kWh per year, lowering the annual cost to roughly $4,570 and saving about $5,364. The actual savings for a data center manager are likely to be much greater, given the higher-capacity, higher-performance drives found in today's centers. And as data centers expand to include even more storage, the value of efficient technology increases dramatically.
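To make the arithmetic explicit, the following is a minimal back-of-the-envelope sketch in Python using the figures quoted above (8.4 W per drive, $0.135 per kWh, 24x7 operation, and an assumed 54 percent reduction). The annual_drive_cost helper is purely illustrative and is not part of any Seagate tooling.

```python
# Back-of-the-envelope drive power cost model using the article's figures.
# Assumptions: 8.4 W per drive, $0.135 per kWh, 24x7 operation, and an
# up-to-54% power reduction when PowerChoice idle modes are in effect.

def annual_drive_cost(drive_count, watts_per_drive=8.4,
                      price_per_kwh=0.135, reduction=0.0):
    """Return (kWh per year, dollars per year) for a fleet of drives."""
    effective_watts = watts_per_drive * (1.0 - reduction)
    kwh_per_year = effective_watts * drive_count * 24 * 365 / 1000.0
    return kwh_per_year, kwh_per_year * price_per_kwh

baseline_kwh, baseline_cost = annual_drive_cost(1_000)
reduced_kwh, reduced_cost = annual_drive_cost(1_000, reduction=0.54)

print(f"Baseline: {baseline_kwh:,.0f} kWh/yr, ${baseline_cost:,.2f}/yr")
print(f"With 54% cut: {reduced_kwh:,.0f} kWh/yr, ${reduced_cost:,.2f}/yr")
print(f"Annual savings per 1,000 drives: ${baseline_cost - reduced_cost:,.2f}")
```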

Those numbers are in line with the power usage one expert estimated for Facebook's servers: around 300 watts per server, across what may be more than 180,000 machines. Average power usage at other data centers is likely to be higher, since Facebook places a greater premium on efficiency than most operators.

Carrying those savings across the total number of drives found in today's data centers yields even more impressive results. Some estimates say a large data center like the kind Facebook runs hosts as many as 100,000 hard drives. Even a data center at 10 percent of that capacity would see tens of thousands of dollars in annual savings, as the scaled example below illustrates.
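Reusing the same illustrative sketch and assumptions, scaling the fleet size shows where that figure comes from:

```python
# Hypothetical 10,000-drive data center, roughly a tenth of the Facebook
# estimate, using the annual_drive_cost sketch defined above.
_, cost_baseline = annual_drive_cost(10_000)
_, cost_reduced = annual_drive_cost(10_000, reduction=0.54)
print(f"Estimated annual savings: ${cost_baseline - cost_reduced:,.2f}")
# Prints roughly $53,600 per year at the quoted electricity rate.
```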
