Cloud & Data Center

Six Strategies to Positively Impact Enterprise Storage in 2013

By TMCnet Special Guest
Rob Commins, VP of Marketing, Tegile Systems
  |  March 15, 2013

This article originally appeared in the March 2013 issue of INTERNET TELEPHONY.

In 2012, we saw the rise of a number of new technologies and new players across the IT spectrum. While 2012 was packed with innovation, 2013 will bring even greater changes likely to impact the enterprise storage industry.

Converged Infrastructure Players Seek a Niche

Many large firms and small startups alike have been using the phrase “converged infrastructure” to describe pools of assets that deliver storage, server and networking resources to applications. These assets are managed as a single entity that can be provisioned, monitored and administered from a single location.

While such solutions may hold initial promise, they introduce a significant element of inflexibility into the organization, defying the general aim of virtualization, which promises near-endless flexibility.

Both small and large players have yet to address this lack of flexibility, leaving converged solutions a fit for only a subset of the market. Review the large converged infrastructure plays, for example, and you will find narrow configuration options. At the other end of the spectrum, emerging converged infrastructure devices do not yet allow organizations to scale individual resources independently as they run low.

For hardware that is supposed to simplify the data center, it will be quite some time before converged systems can do so in a truly cost-effective way that lets administrators fully balance all of their resources.

Virtualized Infrastructure Management Becomes Increasingly Complex

Multiple trends are converging to create new challenges for the virtualization administrator in 2013. First, the industry continues to see virtualization extended to an ever-increasing number of workloads while, at the same time, many organizations are beginning to explore adding a secondary hypervisor to their IT environments.

It’s already well known that the majority of new workloads run on the virtualization layer rather than being deployed on physical servers. However, companies have now become so comfortable with their virtualized infrastructures and hardware, especially storage, that even the most intensive applications can safely reside in the virtualized environment. After lagging for a number of years, 2012 saw the rise of a number of new storage vendors with the ability to make good on that promise for virtualization.

2012 also saw the release of Microsoft’s Hyper-V 2012, a hypervisor with genuinely enterprise-grade features. With many companies considering Hyper-V as a secondary hypervisor, environment management will become more complex in 2013, although it will get easier as multi-hypervisor management tools grow in capability and maturity.

Hybrid Economics Drive VDI Forward

For years the market has been saying that this is the year virtual desktop infrastructure (VDI) finally arrives. In 2013, that recurring mantra may very well come true, with VDI gaining a real foothold in the enterprise. Unfortunately, VDI deployments have been plagued by the need for very costly storage, which has driven the total cost to a point where, for many CIOs, the financials no longer made sense.

The reason 2013 looks different is simple: storage arrays have finally come of age from both a performance and a pricing standpoint, bringing VDI into the realm of financial reality. Tegile believes VDI will carve out a nascent slice of the mobile workforce device market in 2013.

That being said, there is still significant work to be done to make both the acquisition cost and the operational economics of VDI work. Customers still tell us that storage consumes upwards of 40 percent of the budget for a VDI implementation, and we’re working hard to bring that number down to help make VDI deployments a reality.
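To put that 40 percent figure in perspective, here is a back-of-the-envelope sketch in Python. All of the dollar amounts are hypothetical assumptions for illustration only, not Tegile or customer pricing; the point is simply how the per-desktop total moves as storage’s share of the budget shrinks.

```python
# Hypothetical back-of-the-envelope VDI cost model.
# All dollar figures are illustrative assumptions, not vendor pricing.

def cost_per_desktop(non_storage_cost, storage_share):
    """Total per-desktop cost when storage is a given share of the budget.

    non_storage_cost: per-desktop spend on servers, licenses, networking, etc.
    storage_share: fraction of the *total* budget consumed by storage (0-1).
    """
    # If storage is s of the total T, then non-storage is (1 - s) * T,
    # so T = non_storage_cost / (1 - s).
    return non_storage_cost / (1.0 - storage_share)

non_storage = 300.0  # assumed $/desktop for everything except storage

for share in (0.40, 0.25, 0.15):
    total = cost_per_desktop(non_storage, share)
    storage = total - non_storage
    print(f"storage at {share:.0%}: total ${total:,.0f}/desktop "
          f"(${storage:,.0f} of it storage)")
```

Under these made-up numbers, cutting storage from 40 percent of the budget to 15 percent takes the hypothetical per-desktop cost from about $500 to roughly $353.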

Big Data Not a Big Deal to Storage Admins

In 2012, big data hit the wires in a big way. Customers love what big data stands for and the promise it brings to decision-making, helping organizations find the next big thing in what otherwise appears to be random information. So the idea and the promise of big data are widely understood, but there is still big uncertainty around how the advent of big data will make things so dramatically different for IT. After all, while big data may require big infrastructure, hasn’t IT always had to provide robust services?

As has always been the case, users need blazingly fast infrastructure running at low latencies and large repositories for unstructured data. That sounds an awful lot like what we in the IT market have been working on for many years now. What appears to be different is the confluence of cost, performance and the ability to leverage cloud-based solutions to answer bigger questions than we’ve been able to in the past. That’s the big deal, not the big data itself. In the end, yes, big data should help businesses drive big earnings; to the IT team in the boiler room, though, managing all of this data will look very familiar.

Today’s big data initiatives leverage emerging hardware and software tools, some of which only became big news in 2012. Emerging hybrid and all-flash arrays both have the potential to support even the most intensive big data projects. Further, new tools such as Hadoop have come on the scene, enabling quick deployment for organizations that want to jump on the big data bandwagon.

Solid State Drives Go Mainstream

For a decade, the storage market sat maxed out with spinning drives at 15K RPM and didn’t move much beyond that until the last couple of years. Relatively recently, solid state drives hit the market and, over time, have dropped in price and become all but mainstream, a process we expect to reach completion in 2013.

The move to SSDs has been aided by those price drops, but also by organizations’ need to better balance storage capacity costs against storage performance costs. In recent years, SSDs have made a real impact on how we optimize storage tiers for IOPS. Much has been written about SSDs being the death knell for hard drives, and the almost religious position some vendors have taken is reminiscent of the tape-is-dead debate that has been running for almost 20 years. The hard drive industry has proven time and time again that, with hard-core chemistry, engineering and tribology, the super-paramagnetic limit can be pushed back and the $/GB curve for hard drives will continue to drop.

In 2013 and beyond, there will always be a place for hard drives optimized for $/GB. HDDs optimized for $/IOPS, however, are a dying breed.

The job of optimizing for IOPS will instead move to SSDs, but with a twist. Read on.

Hybrid Storage Architectures Move Center Stage

It’s a fact that SSDs carry a lower cost per IOPS than HDDs, but, on the flip side, HDDs bring a $/GB figure that is orders of magnitude lower than that of even the least expensive SSD. It is in this balance between cost and performance that the beauty of a hybrid approach to storage becomes apparent: there is simply no better way at present to combine the very low cost per GB of HDDs with the very low cost per IOPS of SSDs.
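As a rough illustration of that balance, the toy calculation below compares the drive cost of hitting the same capacity and IOPS targets with an all-HDD pool, an all-SSD pool, and a hybrid that uses SSDs only for performance and HDDs only for capacity. Every price, capacity and IOPS figure here is a hypothetical, circa-2013 assumption, not a vendor quote, and the hybrid line assumes the active working set fits on flash.

```python
# Toy comparison of $/GB vs. $/IOPS trade-offs for a shared pool.
# All prices and drive specs below are rough, hypothetical assumptions.
import math

HDD = {"gb": 2000, "iops": 150,   "price": 150.0}   # assumed 2TB nearline drive
SSD = {"gb": 200,  "iops": 20000, "price": 400.0}   # assumed 200GB eMLC SSD

def drives_needed(drive, gb_target, iops_target):
    """Smallest drive count that satisfies both the capacity and IOPS targets."""
    return max(math.ceil(gb_target / drive["gb"]),
               math.ceil(iops_target / drive["iops"]))

gb_target, iops_target = 50_000, 60_000   # 50TB usable, 60K IOPS working set

all_hdd = drives_needed(HDD, gb_target, iops_target) * HDD["price"]
all_ssd = drives_needed(SSD, gb_target, iops_target) * SSD["price"]

# Hybrid: SSDs absorb the IOPS (assuming the hot working set fits on flash),
# HDDs absorb the capacity.
hybrid = (drives_needed(SSD, 0, iops_target) * SSD["price"] +
          drives_needed(HDD, gb_target, 0) * HDD["price"])

print(f"all-HDD: ${all_hdd:,.0f}   all-SSD: ${all_ssd:,.0f}   hybrid: ${hybrid:,.0f}")
```

With these assumed figures, the all-HDD pool needs 400 spindles just to reach the IOPS target, the all-SSD pool needs 250 drives just to reach the capacity target, and the hybrid gets there with a handful of SSDs in front of 25 HDDs, at a fraction of either cost.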

Even as SSDs are dropping in price, we do not see $/GB price parity with HDDs coming anytime soon, and certainly not in 2013 or 2014.

On a per-IOPS basis, the acquisition and operational cost of SSDs is far superior to that of HDDs. This is why so many newer vendors are focusing on hybrid architectures, using SSDs for performance optimization and HDDs for capacity optimization. Although there are a few niche scenarios that call for an all-SSD shared storage system, the opportunity for hybrid arrays is far bigger and far more sensible for mainstream IT workloads.
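The placement logic behind such a hybrid array can be sketched very simply. The Python fragment below is a hypothetical illustration of the general idea, not Tegile’s or any other vendor’s actual tiering algorithm: blocks with a high recent access rate stay on the flash tier, and everything else settles onto disk.

```python
# Hypothetical hot/cold placement policy for a hybrid (SSD + HDD) pool.
# An illustrative sketch only, not any vendor's actual tiering algorithm.
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    accesses_per_hour: float   # recent access rate tracked by the array

HOT_THRESHOLD = 10.0           # assumed cutoff separating "hot" from "cold" data

def place(block: Block) -> str:
    """Return the tier a block should live on under this simple policy."""
    return "ssd" if block.accesses_per_hour >= HOT_THRESHOLD else "hdd"

for b in [Block(1, 120.0), Block(2, 0.3), Block(3, 15.0)]:
    print(b.block_id, "->", place(b))
```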

Rob Commins is vice president of marketing for Tegile Systems, a provider of primary storage de-duplication in virtualized server and desktop environments.




Edited by Braden Becker