
Focus On Throughput To Get The Most Performance For Your Money
by Robyn Weisman

You’ve taken this whole virtualization thing to heart, consolidating your servers and even migrating certain departments to a VDI (Virtual Desktop Infrastructure). But in this move to make your data center more agile and energy-efficient (among so many things), you’re running into an IT bottleneck because your legacy storage just can’t seem to keep up with the new demands posed by your virtual environment. And the problem with sticking to your legacy storage is that you run into all kinds of performance problems, says Ed Lee, lead architect at virtual storage solutions provider Tintri (www.tintri.com).

Lee has seen many instances where an SME has virtualized, say, 20 to 30% of its infrastructure and discovers it cannot virtualize the next 10 to 20% without spending boatloads of money. “By that point, it’s much harder to debug your infrastructure because you’ve already got a bunch of applications running on this legacy storage, and so if one virtual machine is suffering, you can’t tell why it’s suffering or how to fix it,” he says.

Given that most data centers are still running hybrid environments, you don’t have to run out and replace all of your storage en masse. However, you do want to make sure that the storage you purchase from this point forward can keep up with your virtualization architecture. Here are some things to keep in mind.

Key Points

  • Start thinking about cost per I/Ops rather than cost per gigabyte when buying storage for virtualized environments.
  • Implement auto-tiering technologies to manage your hybrid storage environment.
  • Consider buying storage that uses a file-oriented protocol such as NFS, rather than a block-level one, for virtualized storage.

Think In Terms Of I/Ops

As the density of virtual machines grows, it drives more I/O traffic to storage, which can cause obvious gridlock. This issue is especially common when organizations implement VDI, says John Sloan, lead analyst at Info-Tech.

“In the past, a professional firm with a modest number of Windows servers would consolidate those servers, connect them to an iSCSI or Ethernet-connected SAN, and get good enough performance from that platform,” Sloan says. “But when that same firm wants to roll out virtual desktops to all 500 of its professionals, those 500 virtual desktops are driving a lot more I/O traffic to that storage, which can’t handle it.”

Because throughput has become so important in a VDI environment, IT needs to re-evaluate how it judges the cost of storage. People have tended to focus on cost per gigabyte, but if you need to increase your I/O capacity, SSD and flash drives may end up being more cost-efficient in the long run. Unlike hard drives, which can only handle hundreds of I/Ops, SSDs can process thousands of I/Ops.

“So while an SSD may cost $30 to $40 per gigabyte vs. $1 per gigabyte for a [traditional] hard drive, the cost per I/Ops may end up being the same,” Sloan says. “And if you need to support a VDI, it may make more sense to buy one or two SSDs, rather than buying a whole bunch of hard drives and striping the traffic over all of them.”
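
To make Sloan’s cost-per-I/Ops arithmetic concrete, the short Python sketch below compares a single SSD and a single SAS hard drive on both metrics. The capacities, prices, and I/Ops figures are illustrative assumptions chosen to be roughly in line with the ranges quoted above, not vendor specifications.

```python
# Illustrative cost-per-gigabyte vs. cost-per-I/Ops comparison for one
# SSD and one SAS hard drive. Capacities, prices, and I/Ops figures are
# assumptions, not vendor specifications.

drives = {
    # name: (capacity in GB, price in USD, sustained random I/Ops)
    "SSD":     (200, 7000, 15000),   # ~$35 per GB, thousands of I/Ops
    "SAS HDD": (600,  600,   300),   # ~$1 per GB, hundreds of I/Ops
}

for name, (capacity_gb, price_usd, iops) in drives.items():
    cost_per_gb = price_usd / capacity_gb
    cost_per_iops = price_usd / iops
    print(f"{name:8s} ${cost_per_gb:5.2f}/GB  ${cost_per_iops:5.2f} per I/Ops")
```

With these assumed figures, the SSD costs roughly 35 times more per gigabyte yet comes out comparable or even ahead on a per-I/Ops basis, which is the comparison Sloan is urging buyers to make.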

Auto-Tier Your Storage Environment

Buying an all-SSD RAID is too expensive for most SMEs to contemplate, at least for now. As a result, auto-tiering—a storage virtualization technology designed to move data from one tier of disk to another in a nondestructive fashion—enables enterprises to control and leverage their mixed storage environments. “Your server sees its storage volume [from which] it’s reading and writing data, but behind that volume, some of the actual data blocks are being written to SAS drives and others to the SSD, for example,” Sloan says.

Auto-tiering is especially useful when virtualizing desktops. “At certain times of the day, such as when everybody is turning on their virtual desktops, auto-tiering lets you move those ‘bootstorms’ to SSDs that can handle the tremendous amount of I/O traffic going to the storage arrays so that they don’t bog down. Then later [that traffic] can be moved to a different, less-expensive tier,” Sloan says.

Sloan points out that SSDs typically are more resilient than traditional hard drives because they have no moving parts—but that doesn’t mean they’re indestructible. SSDs can wear out from having data written on individual flash chips within the drive itself. “The manufacturers for enterprise-class SSDs are doing things like making sure that writes are spread out evenly across all the chips so that the first three chips don’t get written to all the time, but they do have a life cycle and have to be replaced at some point,” he says.
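
The behavior Sloan describes, where the server sees a single volume while the array quietly places hot blocks on SSD and cold blocks on SAS, can be sketched as a simple access-frequency policy. Everything in the sketch below is hypothetical: real arrays implement this in firmware with far more sophisticated heuristics, and the tier names, block granularity, and promotion threshold are placeholders.

```python
from collections import defaultdict

# Toy auto-tiering policy: blocks accessed more than HOT_THRESHOLD times
# in the current sampling window are promoted to the SSD tier; everything
# else is demoted to the SAS tier. Threshold and tier names are
# illustrative only.

HOT_THRESHOLD = 100

access_counts = defaultdict(int)   # block id -> accesses this window
placement = {}                     # block id -> "ssd" or "sas"

def record_access(block_id: int) -> None:
    access_counts[block_id] += 1

def rebalance() -> None:
    """Run at the end of each sampling window (e.g., every few minutes)."""
    for block_id, count in access_counts.items():
        placement[block_id] = "ssd" if count >= HOT_THRESHOLD else "sas"
    access_counts.clear()

# Example: a boot storm hammers blocks 0-9, while block 42 stays idle.
for _ in range(500):
    for b in range(10):
        record_access(b)
record_access(42)
rebalance()
print(placement[0], placement[42])   # -> ssd sas
```

A boot storm is exactly the case where such a policy pays off: the same small set of OS blocks is read by hundreds of desktops at once, so those blocks are promoted to the SSD tier for the duration and demoted again once the traffic cools off.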

Rethink Storage Protocols

Tintri’s Lee recommends that SMEs consider NFS storage protocols when buying storage for virtualized environments. “A lot of small shops tend to start with iSCSI because it’s more of a general-purpose storage used for a wider range of applications, but for virtualization specifically, NFS is simpler, and the performance is really good,” Lee says.

“Most companies are more familiar with block protocols like iSCSI and Fibre Channel because they’ve been around so long, but running a NAS that uses a file-oriented protocol like NFS is easier to configure and maintain, which means less money spent on that storage system,” adds Chris Bennett, vice president of marketing at Tintri.

“Once you get a storage system better suited for VMs, it frees up more of the general-purpose storage to service your legacy applications, giving you the biggest bang for your buck,” Lee says.

Testing Your Virtualized Storage

If you’re planning to move toward using storage virtualization to manage at least a portion of your overall data center storage, first figure out what goals you’re trying to achieve, says Chris Bennett, vice president of marketing at Tintri (www.tintri.com). “Identify one or more pilot projects to experiment with and learn first-hand the most effective way to move forward,” he says.

And while you probably want to avoid using a mission-critical application as your virtualized storage guinea pig, make sure you pick something that is demanding from an I/O perspective but can be put offline or is a periodic process. “Pick a good moderate application with a good amount of I/O needs and think about the kind of storage that will work best for this application,” adds Ed Lee, lead architect at Tintri. “You’re going to be continually investing money into this, so you want to think about what’s going to work well for running these VMs.”
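
If you want a rough, first-order sense of how a candidate datastore handles the kind of random I/O a pilot application will generate, a probe like the sketch below can help frame the conversation. Everything here is an assumption: the mount point is hypothetical, and a serious evaluation would use a dedicated benchmarking tool and a workload modeled on the pilot application itself rather than a Python loop.

```python
import os
import random
import time

# Crude random-read probe against a test file on a candidate datastore.
# The mount point, file size, block size, and duration are placeholders.
# Note that the OS page cache will make repeated runs look faster than
# the underlying storage really is.

PATH = "/mnt/candidate_datastore/testfile.bin"   # hypothetical mount point
FILE_SIZE = 1 * 1024**3                          # 1 GiB test file
BLOCK = 4096                                     # 4 KiB reads
DURATION = 30                                    # seconds

# Create the test file with real data so reads actually touch the disk.
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    with open(PATH, "wb") as f:
        chunk = os.urandom(1024 * 1024)
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)

ops = 0
deadline = time.time() + DURATION
with open(PATH, "rb", buffering=0) as f:
    while time.time() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))
        f.read(BLOCK)
        ops += 1

print(f"~{ops / DURATION:.0f} random {BLOCK // 1024}K reads per second")
```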