Software-defined approaches help remake data centers in the age of the cloud

With the rapid emergence of cloud computing services that cover a comprehensive range of resources from infrastructure to end-user software, it would seem that many organizations no longer require data centers. A lot of startups, for instance, have never managed their own facilities, yet have succeeded in onboarding millions of users with a provider's help.

TechTarget's Beth Pariseau recently looked at the case of Flipboard, the makers of a popular mobile news and social app. Flipboard has relied on three-year reserved instances from Amazon Web Services to reach 100 million readers with only three full-time staff.

But what works for one organization isn't necessarily a panacea for the IT challenges at every enterprise. Even AWS vice president Andy Jassy has publicly acknowledged that public cloud providers understand the situation of companies that still have significant assets on premises. An IDC study estimated that only 13 percent of enterprise data currently resides in the cloud, and that more than 60 percent of it never will.

Far from dying out, enterprise data centers and private clouds are being remade with new technologies. The uptake of mobile devices and broadband Internet has put considerable demands on traditional infrastructure, but organizations have responded by deploying solutions that are both high-performing and economical.

More specifically, software-defined storage and networking may play a key role in boosting the ability of IT architectures to deliver applications and services consistently and cost-effectively, through the use of industry-standard, modular cloud hardware instead of proprietary appliances. In addition, the use of SDS and SDN makes it possible for enterprises to scale operations even if they maintain significant assets in a private cloud.

Large enterprises such as Bank of America take a software-defined approach to IT
The software-defined data center – with all important technologies decoupled from the underlying hardware – is still just an idea for most organizations. However, at least a few large financial services providers are moving in that direction.

Outfits such as Goldman Sachs and Bank of America are notable for both the scope of their respective operations and the number of IT personnel they require to maintain their systems, so it's no surprise that these organizations would be on the cutting edge when it comes to data center optimization. The financial sector across the globe is expected to spend $430 billion on IT this year, or about 20 percent of the worldwide total, according to IDC.

Bank of America executive David Reilly has stated that going forward there will be less differentiation in hardware, creating incentives to abandon proprietary devices in favor of commodity alternatives. Such an overhaul could entail the increasing use of x86 systems and storage media such as performance and/or capacity HDDs, both of which can be used with software layers for intelligent management of resources and reduced dependencies on specific endpoints.

Earlier this year, Bank of America worked with vendors on several private cloud environments, including one based on OpenStack, the open source software for building cloud computing architectures that are open, eminently modifiable and capable of running on a wide range of underlying infrastructure. The goal was to reduce costs by as much as 50 percent and put the bank in a position to respond easily to changes by leveraging flexible resources.
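For readers curious what provisioning on such a platform actually looks like, here is a minimal sketch using the OpenStack SDK for Python. The cloud, image, flavor and network names are illustrative assumptions, not details of Bank of America's deployment.

    # Minimal OpenStack provisioning sketch (illustrative only).
    # Assumes a cloud named "private-cloud" is defined in clouds.yaml and that
    # the image, flavor and network names below exist in that environment.
    import openstack

    conn = openstack.connect(cloud="private-cloud")

    # Look up the building blocks by name.
    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("internal")

    # Boot a virtual server on whatever commodity hardware backs the cloud.
    server = conn.compute.create_server(
        name="app-node-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(f"{server.name} is {server.status}")

Because the SDK talks to standard OpenStack APIs, a script like this runs unchanged whether the underlying servers are proprietary appliances or commodity x86 boxes, which is much of the appeal of the software-defined model.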

Letting software do most of the work of managing and orchestrating workloads has the potential to drive down capital expenditures on hardware and open up customization possibilities suited to the organization's needs. Moreover, SDS and SDN contribute cloud-like scalability in a secure environment, a crucial goal for enterprises that are keen to overhaul and grow their operations yet have lingering concerns about data security and compliance – especially in regulated industries such as finance.

"[A software-defined approach] dramatically changes the way organizations like ours deliver technology services internally," Reilly told CIO Insight's Samuel Greengard. "The ability to take a software-defined approach to a device – server, firewall, network switch, storage device, whatever – is a huge step forward in terms of functionality. The approach adds a great deal of speed and flexibility. One of the problems with traditional infrastructure models and even the current cloud environment is that too many resources are fixed. It's increasingly necessary to flex up and down dynamically – all within a secure infrastructure."

How important is flash to software-defined storage?
While organizations can take a software-defined approach to many types of devices, let's look at storage media in particular, and how SDS is currently faring in data centers. The most recent version of DataCore's "The State of Software-Defined Storage" surveyed 388 IT professionals, finding that many of them took up SDS as a way to reduce migration headaches between devices from different vendors and manage increasingly diverse hardware fleets.

Such rationale is hardly unusual, but what is surprising is the number of respondents admitting that flash played only a small part in their SDS deployments so far. The relative costs of SSDs and HDDs are still relevant for many buyers who are, after all, trying to set up custom systems that meet business requirements while staying within budget. But the reasons for choosing HDDs for some workflows are multifaceted.

"With all of the hype around the 'all-flash datacenter,' it is clear that new fast hardware technology alone is not the answer," stated Evaluator Group's Randy Kern in the report, according to SiliconANGLE. "There are multiple critical factors preventing organizations from making this move – lack of smart software that integrates and optimizes their use, the relative high cost and the realizations that not all applications benefit from flash devices."

Given the variety of workloads in a data center, from intensive computing operations to cold storage, flash read/write cycles can be wasted on tasks that don't benefit from exceptional speed. Hybrid storage arrays that leverage both SSDs and HDDs are often more efficient and cost-effective than all-flash setups.
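As a back-of-the-envelope illustration of that tradeoff, the toy Python sketch below keeps only the "hot" fraction of a data set on a flash tier, places the rest on capacity disk, and compares the blended cost with an all-flash layout. The prices per terabyte and the hot-data fraction are purely assumed figures for illustration, not vendor or survey data.

    # Toy hybrid-tiering cost sketch (all figures are illustrative assumptions).

    TOTAL_TB = 100          # total capacity to provision
    HOT_FRACTION = 0.10     # assumed share of data that is frequently accessed
    SSD_COST_PER_TB = 500   # assumed $/TB for enterprise flash
    HDD_COST_PER_TB = 50    # assumed $/TB for capacity HDDs

    def blended_cost(total_tb, hot_fraction, ssd_per_tb, hdd_per_tb):
        """Cost of keeping only the hot fraction on flash and the rest on disk."""
        hot_tb = total_tb * hot_fraction
        cold_tb = total_tb - hot_tb
        return hot_tb * ssd_per_tb + cold_tb * hdd_per_tb

    all_flash = TOTAL_TB * SSD_COST_PER_TB
    hybrid = blended_cost(TOTAL_TB, HOT_FRACTION, SSD_COST_PER_TB, HDD_COST_PER_TB)

    print(f"All-flash: ${all_flash:,.0f}")
    print(f"Hybrid:    ${hybrid:,.0f}")
    print(f"Savings:   {1 - hybrid / all_flash:.0%}")

Under these assumed figures the hybrid layout costs roughly a fifth of the all-flash one; the real numbers depend on a workload's actual hot-data share and prevailing media prices, which is exactly the kind of policy decision SDS software is meant to automate.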

With new media such as Seagate's Enterprise Capacity 3.5-inch HDD v4 pushing the envelope, enterprises have more options than ever before for building cloud storage systems that suit their operations. The public cloud may have changed how companies approach IT, but there's now the opportunity to learn from its innovations and remake data centers into more scalable and economical facilities.
