
Managing the information that drives the enterprise

STORAGE Vol. 9 No. 6 September 2010

Virtualizing NAS
File storage is getting out of hand and most NAS systems can't scale to accommodate the growth. It's time to consider virtualizing your NAS storage. P. 13

ALSO INSIDE
5 ILM is back!

8 Gotta have primary storage dedupe

21 Time for a network tune-up

27 Readers rate their midrange arrays

33 Backup for the 21st century

36 Just say no to storage stacks?

38 RAID is still first line of defense



STORAGE inside | september 2010

Cover illustration by ENRICO VARRASSO

5 EDITORIAL | ILM Lives Again!
Information lifecycle management faded into oblivion without getting serious notice. But it's back with a new name and more realistic goals. by RICH CASTAGNA

8 STORWARS | Primary Storage Dedupe: Requisite for the Future
Few companies are immune to the runaway growth of file data. While automated tiering and thin provisioning can help users cope with capacity demands, more drastic measures, like primary storage data reduction, are needed. by TONY ASARO

13 Virtualizing NAS
Traditional file storage typically lacks the scalability most companies require and results in disconnected islands of storage. File virtualization can pool those strained resources and provide for future growth. by JACOB GSOEDL

21 Top 10 Tips for Tuning Your Storage Network
Before you blame your storage system for poor performance, take a look at your storage network. These 10 tips will help you find and fix the bottlenecks in your storage network infrastructure. by GEORGE CRUMP

27 Quality Awards V: Compellent Regains Top Midrange Arrays Spot
Loyal, and apparently very satisfied, users propelled Compellent to the front of the pack among midrange storage systems. Read all of the results of our fifth Quality Awards service and reliability survey for midrange arrays. by RICH CASTAGNA

33 HOT SPOTS | Getting in Front of Backup
Adding disk to the process revolutionized backup, but now the focus is shifting from back-end hardware to newer front-end technologies, like CDP, replication, source-side deduplication and snapshots, that are poised to provide even greater backup efficiencies. by LAUREN WHITEHOUSE

36 READ/WRITE | Storage Vendors Are Stacking the Deck
Storage vendors have been busy creating server-to-application product stacks. It looks like the type of ploy that will give them more leverage and take it away from you. by ARUN TANEJA

38 SNAPSHOT | RAID Still Rules for First-Line Defense
RAID has taken some knocks lately, like criticism that it's a nearly 30-year-old technology that can't stand up to the rigors of a modern data storage environment. But 96% of the respondents to our survey said they rely on some form of RAID. by RICH CASTAGNA

39 Vendor Resources
Useful links from our advertisers.



editorial | rich castagna

ILM lives again!
Information lifecycle management faded into oblivion without getting serious notice. But it's back with a new name and more realistic goals.

THE PHRASE "information lifecycle management" seemed to serve as a cure for insomnia when it was first introduced. Even its acronym, ILM, failed to catch on in an industry that loves acronyms. And saying "ILM" to a storage manager produced glazed eyes, a stony silence or both.

But the concept of moving data to the most appropriate type of storage based on its current usefulness (or age) still sounds like an idea worth waking up to, doesn't it? Everybody's swimming upstream against a rising tide of data with fewer and fewer dollars to keep them afloat, so why wouldn't you want to ensure that you're not blowing bucks on expensive storage for data with little or no value?

Most shops do care and are taking a hard look at where they put their data. You don't hear a lot of "ILM" chatter but, hey, that's exactly what it is. When the idea of ILM rolled around to open systems, hijacked from the mainframe world's hierarchical storage management (HSM), more people seemed to be hung up on determining the value of the data rather than its ideal location.

As a result, data classification became a new catchphrase, and a handful of companies with classification technologies sprang up. The premise was that you needed to know more about a piece of data than when it was created, when it was last changed or how big it was. All of that can be useful information, but you need some real intelligence about that data if you have any hope of determining its proper disposition.

People say the more you know, the better. So why not crack open your data files and see what's inside? After all, you can't tell if data should hang around on premium platters or be shelved to some near-line system or the equivalent of storage Siberia if you don't know its true worth. But to know all that, you would need to get your business units involved, which is about the time ILM gets laid to rest.

But you can't keep a good idea down, and ILM is back and being taken more seriously than ever. Saying "ILM" in public is still a no-no, but whatever you call it, storage tiering or simply smart storage management, it's back. What's different this time is that we're focused on the problem. We're looking at location, the placement of data, much more closely. We've essentially stopped looking for a perfect solution long enough to consider what might be good enough or at least expedient.




But that explanation is a little too simple; ILM is back because we have more choices about where to put things than we did before. Solid-state storage might be the key catalyst for ILM's renewal. When solid state began to trickle into enterprise storage systems, the debate was over how to determine what applications, if any, were worth the incredibly high price of flash. Solid-staters said forget $/GB and think in terms of dollars per I/O, which added a new dimension to the argument. Eventually, someone realized that rather than parking data on solid-state storage, we should just let it hang out there for as long as it's needed.

So the idea of moving data dynamically and automatically came into play. Forget about cracking files or indexing content; let's just see how often and how fast the data is needed. Not every company has the app or the bucks to add a pretty expensive tier at the top of the storage triangle, but the same principles could be used to move data around from, say, SAS to SATA. There might not be sophisticated data classification going on behind the curtain, but it's a practical solution.

Cloud storage, being taken more and more seriously by enterprises every day, tosses yet another tier into the mix. And clever startups like StorSimple and Nasuni have built appliances that almost seamlessly integrate the cloud with data center storage.

And now that LTO-5 is here, tape is suddenly cool again. LTO-5's 3 TB capacity and 240 MBps throughput (both with compression) definitely reinforce tape's status as a bona fide storage tier.

If your storage vendor doesn't offer some form of automated data movement, ask when it will. Just as thin provisioning is already entrenched in most enterprise storage systems, and data reduction is moving along that same route, automated tiering will become a basic part of a storage vendor's system management set. If it isn't, then you might want to consider another vendor.

Rich Castagna ([email protected]) is editorial director of the Storage Media Group.

* Click here for a sneak peek at what’s coming up in the October 2010 issue.




StorWars | tony asaro

Primary storage dedupe: Requisite for the future
Tools like automated tiering and thin provisioning help users cope with capacity demands, but more drastic measures, like primary storage data reduction, are needed.

TEN YEARS AGO, 10 TB was considered a large storage environment. Now it's common to have hundreds of terabytes, and there are even environments with petabytes of storage in the double-digit range. It's safe to assume that data storage capacity growth will continue over the next 10 years as storage environments measured in exabytes begin to emerge and, over time, become mainstream. I actually talked to one customer who claimed they would have an exabyte of data in the next three years.

Having that much physical storage in the data center is ultimately untenable. So how do we solve the problem? A big part of the answer will be provided through a number of technologies. Hard disk drives will continue to become denser. Higher-capacity disk drives have the ability to store more data within the same given physical space. However, fatter disk drives impact application performance. Therefore, intelligent tiering that enables demotion and promotion of active and inactive data between fast and dense storage tiers will balance performance and capacity.

Storage optimization technologies such as thin provisioning provide a better way to use the capacity you already have within your storage systems. Storage systems that use traditional provisioning methods typically have 50% to 70% of their capacity allocated but unused. Users who implement thin provisioning have a much higher utilization rate. If you can reduce allocated but unused capacity to 20%, it will yield significant savings in a petabyte world. For an environment with 1 PB of storage, implementing thin provisioning could result in 300 TB to 500 TB of capacity being saved. If you have 10 PB, then we're talking a savings of 3 PB to 5 PB.
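To make that arithmetic concrete, here's a quick back-of-the-envelope sketch (my own illustration, using the percentages quoted above) showing how the 300 TB to 500 TB figure falls out:

```python
def thin_provisioning_savings(total_tb, unused_before, unused_after=0.20):
    """Capacity reclaimed by shrinking 'allocated but unused' space.

    unused_before and unused_after are fractions of total capacity that
    sit allocated but unused: 0.50 to 0.70 before thin provisioning,
    roughly 0.20 after, per the figures cited in the column.
    """
    return total_tb * (unused_before - unused_after)

# A 1 PB (1,000 TB) environment:
print(thin_provisioning_savings(1000, 0.50))  # 300.0 TB reclaimed
print(thin_provisioning_savings(1000, 0.70))  # 500.0 TB reclaimed
```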

These are great leaps, and I submit that another major leap will be data reduction (data deduplication) for primary storage. The math is simple and the value proposition is a no-brainer. Even moderate dedupe is economically attractive. If your data is consuming 100 TB of disk space and you're able to cut that in half, you would reclaim 50 TB of capacity. That's a fairly modest 2:1 ratio, which should be easily achievable. If you were able to get a 5:1 ratio, you're talking approximately 80 TB of reclaimed capacity. If we consider a petabyte data center, you can save 500 TB on the conservative side (a 2:1 reduction ratio), and 800 TB if you're more optimistic (5:1 ratio). For 10 PB of data, the result could be a capacity savings of up to 8 PB.


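The ratio math is easy to sanity-check yourself. A small sketch (the function is mine; the numbers are the ones in the paragraph above):

```python
def dedupe_reclaimed(consumed_tb, ratio):
    """Capacity reclaimed at a given deduplication ratio.

    A 2:1 ratio halves the footprint; 5:1 leaves one-fifth of it.
    """
    return consumed_tb - consumed_tb / ratio

print(dedupe_reclaimed(100, 2))    # 50.0 TB
print(dedupe_reclaimed(100, 5))    # 80.0 TB
print(dedupe_reclaimed(1000, 2))   # 500.0 TB on a petabyte
```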

The savings are staggering when you consider just the capital costs, but it also drives down your maintenance costs. When you factor in the impact on operations and people resources, the value proposition becomes even more compelling. And if you add all of that to power, cooling and floor space savings, primary dedupe can completely change the IT landscape.

You would expect every storage system vendor to have deployed primary deduplication by now, but there are some significant issues to overcome, such as:

• There's a potential performance impact, which is a no-no in storage.

• Primary dedupe may require more internal resources (e.g., memory and CPU). In some cases, it isn't simply a question of adding more because of design limitations.

• Even if there are no physical resource issues, some storage systems may require architectural changes to support deduplication. This could take years or may not even be possible in some cases.

• Regardless of what anyone tells you, primary dedupe is complex technology that's typically not a core competency for most vendors.

• If something goes wrong, the risk (losing data forever) is high, so vendors are cautious.

There are two storage system vendors that provide primary dedupe today. While both vendors have modest adoption, it certainly isn't extensive. The reason for this is that their deduplication products have distinct limitations in terms of scalability and performance. However, we're on the threshold of more and better products coming to market. You'll see announcements later this year and in 2011, and it will grow from there.

Data dedupe is a form of virtualization, and I believe it will become as ubiquitous as server virtualization within all tiers of data storage. The amount of storage we have today and the growth over the next decade is a pervasive problem that has to be solved. The reality is that we can't just keep throwing money at the problem.

Tony Asaro is senior analyst and founder of Voices of IT (www.VoicesofIT.com).




COMING IN OCTOBER

Integrating Cloud and Traditional Backup Apps
Cloud storage services are making some inroads into enterprise storage operations, especially for backup and disaster recovery (DR). As standalone services, cloud backup and cloud DR can be useful, but for many firms they would be far more attractive alternatives if they could be integrated with more traditional backup methods. We'll look at how some backup and DR app vendors are working to integrate cloud storage services.

10 Tips: Managing Storage for Virtual Servers and Desktops
Virtualization on the server and desktop side of IT shops has provided a relatively easy way to reduce physical systems. But the technology has introduced problems for storage managers who need to effectively configure their resources to meet the needs of a consolidated infrastructure. We offer the top practical tips for managing storage in virtualized server and desktop environments.

What Storage Managers Are Buying
Over the last eight years, Storage magazine and SearchStorage.com have fielded the twice-yearly Storage Purchasing Intentions survey to determine the purchasing plans of storage professionals for the current and upcoming years. This article reports on and analyzes the results of the latest edition of the survey, and provides insight into emerging trends. Find out what technologies your peers are interested in for the coming year.

And don't miss our monthly columns and commentary, or the results of our Snapshot reader survey.

TechTarget Storage Media Group

Vice President of Editorial: Mark Schlack
Editorial Director: Rich Castagna
Senior Managing Editor: Kim Hefner
Senior Editor: Ellen O'Brien
Creative Director: Maureen Joyce
Contributing Editors: Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl
Storage magazine subscriptions: www.SearchStorage.com
Site Editor: Ellen O'Brien
Senior News Director: Dave Raffo
Senior News Writer: Sonia Lelii
Features Writer: Carol Sliwa
Senior Managing Editor: Kim Hefner
Associate Site Editor: Megan Kellett
Editorial Assistant: David Schneider
Site Editor: Andrew Burton
Managing Editor: Heather Darcy
Features Writer: Todd Erickson
Executive Editor and Independent Backup Expert: W. Curtis Preston
Site Editor: Susan Troy
Site Editor: Sue Troy
UK Bureau Chief: Antony Adshead

TechTarget Conferences
Director of Editorial Events: Lindsay Jeanloz
Editorial Events Associate: Jacquelyn Hinds

Storage magazine, 275 Grove Street, Newton, MA. [email protected]



Virtualizing NAS

Companies of all sizes are being inundated with unstructured data that's straining the limits of traditional file storage. File virtualization can pool those strained resources and provide for future growth.

By Jacob Gsoedl

UNSTRUCTURED DATA is growing at an unprecedented rate in all industries and has become one of the top challenges for IT departments. Market data from a variety of analyst and research firms shows a congruent picture: in most companies, the amount of unstructured (file-based) data outstrips structured data; it's spread across the enterprise; and it tends to reside on a motley assortment of isolated file stores that range from file servers to network-attached storage (NAS). Management pain points have reached a critical level and associated costs are skyrocketing.


How we ended up in this dilemma is well understood. On the one hand, we have the simplicity of implementing unstructured data stores via Windows and Linux file servers with directly attached and storage-area network (SAN) storage; on the other hand, we have traditional NAS systems that are based on scale-up architectures with inherent limitations to scale. For instance, until NetApp released Ontap 8, it lacked advanced clustering and a global namespace; the only way to extend beyond a single NetApp filer was to buy a larger filer or deploy another one running independently from already installed systems.

The data storage industry is keenly aware of the situation, and vendors have taken different approaches to provide file system and NAS virtualization products that help to overcome the challenge at hand. Even though progress has been made, adoption has been tepid. "It has taken almost 10 years for block-based virtualization to take place," said Greg Schulz, founder and senior analyst at Stillwater, Minn.-based StorageIO Group. "NAS virtualization is still in an early stage and it will take time for it to be widely adopted."

FOUR WAYS TO VIRTUALIZE FILE ACCESS

Virtualizing file access by putting a logical layer between back-end file stores and clients, and providing a global namespace, is clearly the most promising approach to tackling the unstructured data challenge. It's akin to block-based storage virtualization; however, there isn't a single method of implementing file-access virtualization. Instead, we have several architectural approaches competing for a potentially lucrative file-access virtualization market.


terminology | NAS virtualization

Namespace is the organization and presentation of file-system data, such as directory structure and files.

In a non-shared namespace, file-system information is confined to a single physical machine and not shared with others. Traditional scale-up NAS and server-based file stores are examples of products with non-shared namespaces.

Conversely, a shared namespace, also referred to as a global namespace, combines the namespaces of multiple physical machines or nodes into a single namespace. It can be implemented by aggregating the namespaces of multiple machines and presenting them as a single federated namespace, as is usually the case in file-system virtualization and clustered NAS products, or it can be achieved via clustered file systems, where a single file system spreads across multiple physical nodes.

Scale-up NAS is a file-based storage system that scales by replacing hardware with faster components, such as faster CPUs, more memory and more disks. Its namespace spans one or two nodes clustered for high availability.

Scale-out NAS is a file-based storage system that scales by adding nodes to the cluster. Available in N+1 (single redundant node) or N+M (each node has a redundant node) high-availability configurations, these systems provide a namespace that spans multiple nodes, allowing access to data throughout all nodes in the namespace.



1. File-system virtualization (aggregation) is one way of virtualizing file access. At a high level, file-system virtualization accumulates individual file systems into a pool that's accessed by clients as a single unit. In other words, clients see a single large namespace without being aware of the underlying file stores. The underlying file store could be a single NAS, or a mesh of various file servers and NAS systems. File-system virtualization products address two main problems: they give users a single virtual file store, and they offer storage management capabilities such as nondisruptive data migration and file-path persistency while files are moved between different physical file stores.

One of the great benefits of file-system virtualization is that it can be deployed in existing environments without having to rip out existing servers and NAS storage. On the downside, file-system aggregation doesn't address the problem of having to manage each file store individually.

2. Clustered file systems are another way of virtualizing file access. Clustered file systems are part of next-generation NAS systems designed to overcome the limitations of traditional scale-up NAS. They're usually composed of block-based storage nodes, typically starting with three nodes and scaling to petabytes of file storage by simply adding additional nodes. The clustered file system glues the nodes together by presenting a single file system with a single global namespace to clients. Among the vendors offering NAS systems based on clustered file systems are FalconStor Software Inc.'s HyperFS, Hewlett-Packard (HP) Co.'s StorageWorks X9000 Network Storage Systems, IBM's Scale Out Network Attached Storage (SONAS), Isilon Systems Inc., Oracle Corp.'s Sun Storage 7000 Unified Series, Panasas Inc., Quantum Corp.'s StorNext and Symantec Corp.'s FileStore.

3. Clustered NAS is a third way of virtualizing file access. Clustered NAS architectures share many of the benefits of clustered file-system-based NAS. Instead of running a single file system that spreads across all nodes, clustered NAS systems run complete file systems on each node, aggregating them under a single root and presenting them as a single global namespace to connected clients. In a sense, clustered NAS is a combination of a scale-out, multi-node storage architecture and file-system aggregation. Instead of aggregating file systems of heterogeneous file stores, they aggregate file systems on native storage nodes. The BlueArc Corp. Titan and Mercury series of scale-out NAS systems are prime examples of clustered NAS systems.



4. NAS gateways can also be viewed as file-system virtualization devices. Sitting in front of block-based storage, they provide NFS and CIFS access to the block-based storage they front end. Offered by most NAS vendors, they usually allow bringing third-party, block-based storage into the NAS and, if supported by the NAS vendor, into the global namespace.

NAS systems and gateways based on clustered file-system or clustered NAS architectures are next-generation NAS systems and won't integrate with existing legacy file stores; they usually replace them or run in parallel with them. This makes them more difficult to deploy as well as more expensive than file-system virtualization products. However, the benefit of having to manage a single NAS, rather than many small data silos that are simply aggregated by a file-system virtualization product into a single namespace, more often than not justifies the additional effort and cost.

FILE-SYSTEM VIRTUALIZATION USE CASES AND SELECTION CRITERIA

Because ripping out existing file stores and replacing them with a scale-out NAS isn't an option in many situations, file-system virtualization products that aggregate the various file stores into a single global namespace can be viewed as complementary to scale-out and traditional NAS systems, especially during the extended time of transitioning from legacy file stores. "Many customers buy a NAS to get features like replication, archiving and snapshots, but they don't require these for all files," said Brian Gladstein, vice president (VP) of marketing at AutoVirt Inc. "We give them the ability to mix existing low-end file stores with fast filers and provide them with a single namespace."


OPEN SOURCE NAS VIRTUALIZATION

NAS virtualization products are also available as open source software. For instance, the Apache Hadoop Distributed File System (HDFS) handles distribution and redundancy of files, and enables logical files that far exceed the size of any one data storage device. HDFS is designed for commodity hardware and supports anywhere from a few nodes to thousands of nodes. Another example of an open source file system is the Gluster clustered file system for building a scalable NAS with a single global namespace.

Instead of spending a lot of money for traditional NAS systems, an open source file system running on inexpensive hardware components seems like a good alternative. But open source file systems are usually not a good choice for the enterprise. They require significant tuning and maintenance efforts, as well as experts intimately familiar with the intricacies of the chosen software, and they don't come with the support that traditional NAS vendors offer. Availability, reliability, performance and support come first for enterprise storage, and these attributes are difficult to achieve with open source software. Open source file systems are a great choice for cloud storage providers and companies that have to make money on storage, as well as for the research and educational sector, but they're usually not the product of choice in the enterprise.



Even in companies that can centralize their unstructured data onto a NAS with global namespace support, there will likely always be some storage silos that live outside the NAS. It could be departmental data or data that's deemed unworthy to reside on comparatively expensive NAS storage. File-system virtualization products allow combining rogue file stores with NAS devices into a global namespace.

A second use case for file-system aggregation is data migration. Acquisitions, storage infrastructure upgrades and data relocation projects are among the reasons for migrating data from one physical location to another. Because file-system aggregation products virtualize access to heterogeneous file stores, they're also simple yet effective data migration solutions.

Another use case for file-system aggregation is automated storage tiering. Equipped with policy engines for defining data migration rules based on file-system metadata, such as last access date, file size and file type, they enable automatic data movement to suitable storage tiers based on defined policies.
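To illustrate what such a policy engine evaluates, here's a minimal sketch (my own, not any vendor's implementation) of a tiering rule driven by the metadata fields named above:

```python
import os
import time

# One illustrative rule: files untouched for 90 days, or larger than
# 1 GB, get flagged for demotion to a cheaper tier. Real products wrap
# many rules like this in a policy engine and move data nondisruptively.
DAYS_IDLE = 90
SIZE_LIMIT = 1 * 1024**3  # 1 GB

def tier_for(path):
    """Pick a target tier from file-system metadata alone."""
    st = os.stat(path)
    idle_days = (time.time() - st.st_atime) / 86400
    if idle_days > DAYS_IDLE or st.st_size > SIZE_LIMIT:
        return "archive"   # e.g., SATA or near-line storage
    return "primary"       # e.g., SAS or SSD

print(tier_for("/etc/hosts"))  # example path; any existing file works
```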

File-system virtualization products are available as appliances and as software-only products. A software-only product offers the benefit of more flexible deployment on hardware of your choice, and such products usually have a lower degree of vendor lock-in. Conversely, appliance-based file-system virtualization products come in a proven, performance-optimized package and, because hardware and software are provided by the same vendor, there's less risk of finger pointing.

When comparing file-system virtualization products, the level at which virtualization occurs is a relevant evaluation criterion. For instance, while Microsoft Distributed File System (DFS) provides share-level virtualization, a product like F5 Networks Inc.'s ARX series provides file-level virtualization.

Intrusiveness and ease of deployment are also relevant characteristics to consider during a product evaluation. Ideally, a file-system virtualization product should require minimal client changes, and the virtualized data on the back-end file stores shouldn't be changed. File-system support must also be considered. While some systems support only CIFS, products like F5's ARX and EMC Corp.'s Rainfinity support CIFS and NFS, which is relevant in environments with both Windows and Linux file stores. The presence of a policy engine and its capabilities are critical if the product's intended use is for data mobility and automated storage tiering.



FILE-SYSTEM VIRTUALIZATION PRODUCT SAMPLER

File-system virtualization products are offered by a number of vendors, each coming from a different background with varying objectives.

AutoVirt File Virtualization software: Like Microsoft DFS, AutoVirt is a software-only product that runs on Windows servers.

The AutoVirt global namespace uses the CIFS protocol to interact with file servers, clients and DNS. When a client requests a file, DNS facilitates resolution to the appropriate storage device. The global namespace acts as an intermediary between client and DNS. With the AutoVirt global namespace in place, client shortcuts refer to the namespace. The namespace is the authority on the location of networked files and provides the final storage referral with the help of DNS.

AutoVirt can be introduced nondisruptively, without the need to make any changes on clients, by populating the AutoVirt namespace server with the shares of existing file stores. Although this can be done manually, a data discovery service automates discovery of existing file stores and populates the AutoVirt namespace server with metadata. This differs from Microsoft DFS, which requires clients to be configured with the new DFS shares, rather than continuing to use existing file shares.

Also contrary to Microsoft DFS, AutoVirt provides a policy engine that enables rule-based data mobility across the environment to migrate, consolidate, replicate and tier data without affecting end-user access to networked files. Currently available for CIFS, AutoVirt plans to release a version for NFS by year's end.

EMC Rainfinity file virtualization appliance: Rainfinity is a family of file-system virtualization products that virtualize access to unstructured data, and provide data mobility and file tiering services. The Rainfinity Global Namespace Appliance provides a single mount point for clients and applications; the Rainfinity File Management Appliance delivers policy-based management to automate relocation of files to different storage tiers; and the Rainfinity File Virtualization Appliance provides nondisruptive data movement.

Contrary to F5's ARX, the Rainfinity File Virtualization Appliance architecture is designed to switch between in-band and out-of-band operations as needed. The appliance is out-of-band most of the time, and data flows between client systems and back-end file stores directly. It sits outside the data path until a migration is required and then switches to in-band operation.



F5 ARX Series: Acquired from Acopia in 2007 and rebranded as F5 ARX, the F5 ARX series is an inline file-system virtualization appliance. Usually deployed as an active-passive cluster, it's located between CIFS/NFS clients and heterogeneous CIFS/NFS file stores, presenting virtualized CIFS and NFS file systems to clients. Unstructured data is presented in a global virtualized namespace. Built like a network switch, it's available with 2 Gbps ports (ARX500), 12 Gbps ports (ARX2000) and 12 Gbps ports plus two 10 Gbps ports (ARX4000).

With a focus on data mobility and storage tiering, F5's ARX comes with strong data mobility and automated storage tiering features. Orchestrated by a policy engine, it performs bidirectional data movements between different tiers of heterogeneous storage in real time and transparently to users. Similar to AutoVirt, policies are based on file metadata, such as last-accessed date, creation date, file size and file type.

The fact that F5 ARX is an appliance allows it to provide a performance-optimized product that's hard to match by a software-only solution. Built on a split-path architecture, it has both a data path that passes data straight through the device for tasks that don't involve policies, and a control path for anything that requires policies. "We are a DFS on steroids," said Renny Shen, product marketing manager at F5. "While DFS gives you share-level virtualization, we give you file-level virtualization."

Microsoft DFS: Microsoft DFS is a set of client and server services that allow an organization using Microsoft Windows servers to organize distributed CIFS file shares into a distributed file system. DFS provides location transparency and redundancy to improve data availability in case of failure or heavy load by allowing shares in multiple locations to be logically grouped under a single DFS root.

DFS supports the replication of data between servers using File Replication Service (FRS) in server versions up to Windows Server 2003, and DFS Replication (DFSR) in Server 2003 R2, Server 2008 and later versions.

Microsoft DFS supports only Windows CIFS shares and has no provision for bringing NFS or NAS shares into the DFS global namespace. Furthermore, it lacks a policy engine that would enable intelligent data movements. As part of Windows Server, it's free and a good option for companies whose file stores reside mainly on Windows servers.


FILE VIRTUALIZATION OUTLOOK

Access to unstructured data hasn't changed much in the past 15 years, but big changes are happening now. NAS system architectures are moving toward more scalable, multi-node scale-out architectures with global namespace support. NAS behemoth NetApp finally incorporating technology acquired from Spinnaker in its Ontap 8 release, enabling customers to build multi-node NetApp clusters, is indicative of the change.

File-system virtualization products are complementing traditional scale-up and next-generation scale-out NAS systems to provide a global namespace across heterogeneous file stores in the enterprise. While they're currently mostly deployed for the purpose of data mobility and storage tiering, they're likely to play a significant role in the future in providing an enterprise-wide, global unified namespace for all unstructured data.

Jacob Gsoedl is a freelance writer and a corporate director for business systems. He can be reached at [email protected].



TOP 10 TIPS FOR TUNING YOUR STORAGE NETWORK

Storage performance issues are often not related to the storage system at all, but rather to the storage network that links servers to disk arrays. These 10 tips will help you find and fix the bottlenecks in your storage network infrastructure.

By George Crump

EVERY SO OFTEN there's a moment of calm in a data storage manager's life where nothing is broken and there aren't any fires to put out. As rarely as these times might occur, the momentary calm should be taken advantage of rather than savored. This is your opportunity to get some of the kinks out of your storage network so you can eliminate the next emergency before it happens or just be better prepared when it does. We spoke with experts from storage networking vendors (Brocade, Cisco, Emulex and Virtual Instruments) to discuss what storage managers should do to prepare their storage networks for the future and to maximize their investments.

The first few tips that follow have more to do with being prepared than actually tinkering with your storage-area network (SAN), but all of our experts agreed that trying to fine-tune a SAN without adequate preparation is like driving down a freeway without headlights. Before you can roll up your sleeves and get under the hood, you have to do some preparation. The rest of our tips go into more detail, describing specific steps (often at no cost) that you can take to improve SAN performance, efficiency and resiliency.


Tip 1. Know what you have

The No. 1 recommendation in fine-tuning your storage network is to first know what you have in the environment. If you have a problem and need to bring in your vendor's tech experts, the first thing they're going to want is an inventory of your networking environment. If you do the inventory ahead of time, you'll likely pay less for any necessary professional services and it may even help you avoid having to engage them in the first place.

It's important to document each host bus adapter (HBA), cable and switch in the environment while noting how they're interconnected. You should also record the speeds they're actually set at, and the versions of the software or drivers they're running. While all of this may seem painfully obvious, an inventory of what the storage network consists of and how it's configured is the type of document that can quickly fall off the priority list during the urgencies of a typical IT workweek. Taking time to level set and understand what's in the environment, and how it has changed, is critical.

Documenting this information may even pinpoint some areas that are ripe for fine-tuning. We've seen cases where, over the course of time, users have upgraded to 4 Gb Fibre Channel (FC) and, for some reason, their inter-switch links (ISLs) were still set at 1 Gb. A simple change to the switch configurations effectively doubled their performance. If they hadn't taken the time to do an inventory, this obvious mistake may never have come to light.

This could be a zero-cost tip because the information can be captured and stored in spreadsheets. While manually keeping track of this information is possible, in today's rapidly changing, dynamic data center it's becoming a less practical approach. Storage environments change fast and IT staffs are typically stretched thin, so manually maintaining an infrastructure inventory isn't realistic. The vendors we spoke to, and many others, have software and hardware tools that can capture this information automatically.

Of course, those tools aren't free or as cheap as a spreadsheet. But if you weigh their cost against the cost of manually capturing the data, or the cost of missing an important change to the network environment, they can be a good investment. Automated storage resource management (SRM) tools also vary in the data they capture and the level at which they capture it. Many simply poll devices and record status data, while others tap the physical layer and analyze network frames.
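If you do go the zero-cost spreadsheet route, even a small script can keep the inventory consistent. A minimal sketch (the column names and sample rows are my own illustration, not a standard):

```python
import csv

# Columns worth tracking per the tip: each device, how it's connected,
# the speed it's actually set at, and firmware/driver versions.
FIELDS = ["device", "type", "connected_to", "set_speed_gb", "fw_or_driver"]

rows = [
    {"device": "esx01-hba0", "type": "HBA", "connected_to": "switchA:p3",
     "set_speed_gb": 4, "fw_or_driver": "2.72a"},
    {"device": "switchA:p12", "type": "ISL", "connected_to": "switchB:p12",
     "set_speed_gb": 1, "fw_or_driver": "6.4.1"},  # a candidate for Tip 6
]

with open("san_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```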


Tip 2. Know what's going on

After you've developed a good picture of the components in your storage network infrastructure, the next step is to fully understand what those devices are doing at a particular moment in time. Many switch and HBA vendors build some of these capabilities into their products. But instead of going to each device to see its view of traffic conditions, it may be better to find a tool that can provide consolidated real-time feedback on how data is traversing your network. There are software solutions and physical layer access tools that can report on the infrastructure traffic. The tools that can monitor network devices specifically are important because, as all of our experts pointed out, there are situations where operating systems or applications report inaccurate information when compared to what the device is reporting.

These tools can be used for trend analysis and, in some cases, they can simulate an upcoming occurrence of a data storage infrastructure problem. For example, if an ISL is seeing a steady increase in traffic (see Tip 6), the ability to trend that traffic growth will help identify how soon an application rebalance or an increase in ISL bandwidth will be required. Other tools will report on CRC or packet errors on ports, which can indicate an upcoming SFP failure.
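As a toy illustration of that kind of trending (my own sketch, with made-up sample data), a simple least-squares fit over periodic utilization samples can estimate when a link will hit a threshold:

```python
# Project when ISL utilization will cross a threshold, given weekly
# samples (percent of link bandwidth). Invented data, simple linear fit.
samples = [42.0, 44.5, 47.2, 49.8, 52.1]  # weeks 0..4
n = len(samples)
xbar = (n - 1) / 2
ybar = sum(samples) / n
slope = sum((i - xbar) * (y - ybar) for i, y in enumerate(samples)) / \
        sum((i - xbar) ** 2 for i in range(n))

THRESHOLD = 70.0  # rebalance well before the link saturates
weeks_left = (THRESHOLD - samples[-1]) / slope
print(f"~{slope:.1f}%/week growth; threshold in ~{weeks_left:.0f} weeks")
```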

Tip 3. Know what you want to do

With your inventory complete and good visibility into your SAN established, the next step is to figure out what network changes will provide the most benefit to the organization. You may have discovered SAN features that need to be enabled, or perhaps you have new applications or an accelerated rollout of current initiatives that need to be planned. Knowing how activities such as those will impact the rest of the environment and what role the storage infrastructure has to play in those tasks is critical. Generally, the goals come down to increasing reliability or performance, but they may also be to reduce costs.

Tip 4. Limit the impact

When you feel you're at the stage where you're ready to make changes to the environment, the next step is to limit the sphere of impact as much as possible by subdividing the SAN into virtual SANs (VSANs). With the SAN subdivided, changes made to the environment that yield unexpected results (in a worst-case scenario, preventing a server from accessing storage or even causing an outage) will have limited repercussions across the infrastructure. Limiting the sphere of impact is by itself an important fine-tuning step that will help create an environment that's more resilient to changes in the future, and can help contain problems. For example, an application may suddenly need an excessive amount of storage resources; subdividing the SAN will help contain it and keep the rest of the infrastructure from being starved. This aspect of fine-tuning shouldn't require any new purchases as it's a setup and configuration process.



Tip 5. Test to learn, learn to test

Although it may seem to be something of a luxury, one key to fine-tuning is to have a permanent testing lab that can be used to try out proposed changes to the environment or to simulate failed conditions. Lab testing lets you explore the alternatives and develop remedies without impacting the production network. In speaking with our experts, and in our own experience, most SAN emergencies result from implementing a new feature in the storage array or on the SAN. If you lack the resources to create a lab environment, an alternative may be to work with your infrastructure vendors, as many have facilities that can be used to recreate problems or to test the implementation of new features.

Storage I/O performance is typically high on a fine-tuning top 10 list, and although it didn't make it into our top five tips, it rounds out the rest of the list. Before performance issues are tackled, it's important that the environment be documented, understood and made as resilient as possible. While slow response time due to lack of performance tuning is a concern, zero response time because of poor planning is a lot worse.

Tip 6. Understand how you're using ISLs

ISLs (interconnects between switches) are critical areas for tuning and, as a storage-area network grows, they become increasingly important to performance. The art of fine-tuning an ISL is often an area where different vendors will have conflicting opinions on what a good rule of thumb is for switch fan-in configurations and the number of hops between switches. The reality is that the latency of switch connections compared to the latency of mechanical hard drives is dramatically lower, even negligible; however, in high fan-in situations or where there are a lot of hops (servers crossing multiple switches to access data), ISLs play an important role.

The top concern is to ensure that ISLs are configured at the correct bandwidth between the switches, which seems to be a surprisingly common mistake, as mentioned earlier. Beyond that, it's important to measure the traffic flow between hosts and switches, and the ISL traffic between the switches themselves. Switch reporting tools will provide much of this information but, as indicated earlier, a visual tool that measures switch intercommunication may be preferable.
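One quick sanity check on those measurements (a sketch of my own, not a vendor rule of thumb) is the fan-in oversubscription ratio: the aggregate host bandwidth funneled toward the ISLs versus the ISLs' configured bandwidth:

```python
def isl_oversubscription(host_ports_gb, isl_count, isl_speed_gb):
    """Ratio of aggregate host bandwidth to aggregate ISL bandwidth."""
    return sum(host_ports_gb) / (isl_count * isl_speed_gb)

# Twelve 4 Gb host ports sharing two ISLs mistakenly left at 1 Gb:
print(isl_oversubscription([4] * 12, 2, 1))  # 24.0 : 1, far too high
# The same hosts with both ISLs set to 4 Gb:
print(isl_oversubscription([4] * 12, 2, 4))  # 6.0 : 1
```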


Based on the traffic measurements, a determination can be made to rebalance traffic flow by adjusting which primary switch the server connects with, which will involve physical rewiring and potential server downtime. Another option is to add ISLs, which increases bandwidth but consumes ports and, to some extent, further adds to the complexity of the storage architecture.

Tip 7. Use NPIV for virtual machines

Server virtualization has changed just about everything when configuring SANs, and one of the biggest challenges is to identify which virtual machines are demanding the most from the infrastructure. Before server virtualization, a single server had a single application and communicated to the SAN through a single HBA; now virtual hosts may have many servers trying to communicate with the storage infrastructure, all through the same HBA. It's critical to be able to identify the virtual machines that need storage I/O performance the most so that they can be balanced across the hosts, instead of consuming all the resources of a single host. N_Port ID Virtualization (NPIV) is a feature supported by some HBAs that lets you assign each individual virtual machine a virtual World Wide Name (WWN) that will stay associated with it, even through virtual machine migrations from host to host. With NPIV, you can use your switches' statistics to identify the most active virtual machines from the point of view of storage and allocate them appropriately across the hosts in the environment.
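Once each VM has its own virtual WWN, ranking them from switch statistics is straightforward. A sketch (the WWNs, VM names and MBps figures are invented; real numbers would come from your switch's reporting tool):

```python
# MBps per virtual WWN as reported by the switch; with NPIV, each
# virtual machine keeps its own WWN even after it migrates hosts.
vm_traffic = {
    "20:01:00:25:b5:aa:00:01": 310.0,  # vm-sql01
    "20:01:00:25:b5:aa:00:02": 45.5,   # vm-web03
    "20:01:00:25:b5:aa:00:03": 220.8,  # vm-exch01
}

# Busiest first: candidates to spread across different hosts.
for wwn, mbps in sorted(vm_traffic.items(), key=lambda kv: -kv[1]):
    print(f"{wwn}  {mbps:7.1f} MBps")
```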

Tip 8. Know thy HBA queue depth

HBA queue depth is the number of pending storage I/Os that can be sent to the data storage infrastructure. When installing an HBA, most storage administrators simply use the default settings for the card, but the default HBA queue depth setting is typically too high. This can cause storage ports to become congested, leading to application performance issues. If queue depth is set too low, the ports and the SAN infrastructure itself aren't used efficiently. When a storage system isn't loaded with enough pending I/Os, it doesn't get the opportunity to use its cache; if essentially everything expires out of cache before it can be accessed, the majority of accesses will then come from disk. Most HBAs set the default queue depth between 32 and 256, but the optimal range is actually closer to 2 to 8. Most initiators can report on the number of pending requests in their queues at any given time, which allows you to strike a balance between too much and not enough queue depth.


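On Linux hosts, for example, the current setting is exposed through sysfs, so a quick audit is possible. A sketch (it assumes the standard Linux SCSI sysfs layout; the 2-to-8 guideline is the one quoted above):

```python
import glob

# Each SCSI device directory on Linux exposes a queue_depth attribute.
for path in glob.glob("/sys/bus/scsi/devices/*/queue_depth"):
    with open(path) as f:
        depth = int(f.read())
    flag = "  <-- review against the 2-8 guideline" if depth > 8 else ""
    print(f"{path}: {depth}{flag}")
```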

Tip 9. Multipath verification

Multipath verification involves ensuring that I/O traffic has been distributed across redundant paths. In many environments, our experts said they found multipathing isn't working at all or that the load isn't balanced across the available paths. For example, if you have one path carrying 80% of its capacity and the other path only 3%, it can affect availability if an HBA or its connection fails, or it can impact application performance. The goal should be to ensure that traffic is balanced fairly evenly across all available HBA ports and ISLs.

You can use switch reports for multipath verification. To do this, run a report with the port WWNs, the port name and the MBps, sorted by the port name, combined with a filter for an attached device type equal to "server." This is a quick way to identify which links have balanced multipaths, which ones are currently acting as active/passive and which ones don't have an active redundant HBA.
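That check is easy to script once the report is exported. A sketch (the report rows are invented for illustration, and the 25% balance threshold is an arbitrary choice of mine):

```python
from collections import defaultdict

# (port WWN, server name, MBps) rows already filtered to attached
# device type "server," as pulled from a switch report.
report = [
    ("50:06:0b:00:00:c2:62:00", "db01", 120.0),
    ("50:06:0b:00:00:c2:62:01", "db01", 4.0),    # lopsided pair
    ("50:06:0b:00:00:c2:63:00", "app02", 60.0),
    ("50:06:0b:00:00:c2:63:01", "app02", 55.0),  # nicely balanced
]

paths = defaultdict(list)
for wwn, server, mbps in report:
    paths[server].append(mbps)

for server, loads in sorted(paths.items()):
    if len(loads) < 2:
        print(f"{server}: only one active path!")
    elif min(loads) < 0.25 * max(loads):
        print(f"{server}: paths look active/passive {loads}")
    else:
        print(f"{server}: balanced {loads}")
```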

Tip 10. Improve replication and backup performance

While some environments have critical concerns over the performance of a database application, almost all of them need to decrease the amount of time it takes to perform backups or replication functions. Both of these processes are challenged by rapidly growing data sets that need to be replicated across relatively narrow bandwidth connections and ever-shrinking backup windows. They're also the most likely processes to put a continuous load across multiple segments within the SAN infrastructure. The backup server is the most likely candidate to receive data that has to hop across switches or zones to get to it.

All of the above tips apply doubly to backup performance. Also consider adding extra HBAs to the backup server and having ports routed to specific switches within the environment to minimize ISL traffic.

George Crump is president and founder of Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.


QUALITY AWARDS V: Compellent regains top midrange arrays spot

The fifth edition of our service and reliability survey for midrange arrays shows that users of midrange storage systems are pretty darned satisfied with their purchases.

By Rich Castagna

MIDRANGE STORAGE ARRAY vendors seem to be doing a lot of things right. On our latest Quality Awards survey of midrange arrays, all of the finalists scored so well that it's hard to find fault with any of their product lines. This is the fifth time we've canvassed users about the service and reliability of their midrange storage arrays, and these users showed the highest level of satisfaction with this product class to date. With a record overall score of 7.12 for midrange systems, Compellent Technologies Inc. has regained the crown that Dell Inc. snagged in our previous survey.

Compellent's win was earned with a consistent performance, garnering the top scores in all five of our rating categories: sales-force competence, initial product quality, product features, product reliability and technical support. In winning all of the categories, the company's Storage Center products picked up ratings greater than 7.00 in four of them; the ratings range from 1.00 to 8.00.

Storage September 201027

Tim

e fo

r a

net

wo

rk t

un

e-u

pV

irtu

aliz

ing

NAS

Prim

ary

sto

rage

da

ta d

edu

pe

Qu

alit

y Aw

ards

: M

idra

nge

arra

ysJu

st s

ay n

o t

o

sto

rage

sta

cks?

QUALITY AWARDS V: Compellent regains

top midrange arrays spotThe fifth edition of our service and reliability

survey for midrange arrays shows that users of midrange storage systems are pretty darned satisfied with their purchases.

By Rich Castagna

Page 28: Storage Mag Online Sept Updated 92010

Storage September 2010

MIDRANGE MIGHT
But even with such an excellent showing, Compellent must still share at least a little of the Quality Awards spotlight. While not seriously challenging Compellent's overall score of 7.12, all eight finalists finished with overall scores higher than 6.00—the first time that has happened for midrange arrays and a rarity for any Quality Awards survey.

Hewlett-Packard (HP) Co. rode its EVA and P4000 lines to a strong second-place finish, scoring an overall 6.73, with Hitachi Data Systems just a shade behind at 6.67. But a strong 1-2-3 finish isn't the end of the story, as the rest of the group was bunched closely behind the leaders with ratings ranging from NetApp's 6.59 to Oracle Corp.'s (Sun) 6.38. Five of the eight vendors had scores higher than the winning overall score in our last survey.

SHARPER SALES TEAMS
On most Quality Awards surveys, the lowest user ratings typically appear in the sales-force competence category, as was the case on the most recent midrange array survey. With tighter budgets and often urgent needs, storage managers expect sales reps and their support teams to be responsive and well-informed. Just a few years ago on our second midrange survey, the overall average for the sales category was a tepid 5.28, indicating that users' expectations were likely met but rarely exceeded. This time around, the category average is 6.47, suggesting strong—and probably effective—efforts by vendors' sales forces.

Compellent picked up its first category win with a 6.81, buoyed by scores of 7.00-plus for the statements "My sales rep keeps my interests foremost" and "The vendor's sales support team is knowledgeable." In all, Compellent came out on top for four of the six category statements. HP didn't lead for any single statement, but lined up ratings running from 6.45 to 6.96 to finish second with a category average of 6.67. Third-place finisher Hitachi (6.59) was high scorer for two of the category statements ("My sales rep is knowledgeable about my industry" and "My sales rep understands my business").


ABOUT THE SURVEY
The Storage magazine/SearchStorage.com Quality Awards are designed to identify and recognize products that have proven their quality and reliability in actual use. The results are derived from a survey of qualified readers who assess products in five main categories: sales-force competence, initial product quality, product features, product reliability and technical support. Our methodology incorporates statistically valid polling that eliminates market share as a factor. Indeed, our objective is to identify the most reliable products on the market regardless of vendor name, reputation or size. Products are rated on a scale of 1.00 to 8.00, where 8.00 is the best score. A total of 315 respondents provided 497 midrange storage array evaluations.


Whether it’s in response to the rigors of a down economy or reaction tosurveys such as this, vendors appear to be redoubling their sales efforts.

IMPLEMENTATION AND INITIAL QUALITY
A positive sales experience is a great way to start a relationship, but it can quickly be derailed if problems arise during deployment of the product. Here, too, in the initial product quality category, our midrange array vendors seem to be surpassing expectations as well as their previous performances. Ratings were high across the board in this category, with Compellent posting above-7.00 scores for five of the six statements en route to a 7.21 category average finish. Compellent's only sub-7.00 score was a 6.93 for "This product was installed without any defects." Hitachi's rating of 6.94 for that statement topped Compellent by the slimmest possible margin.

HP nudged out Dell for the second spot in the category (6.88 vs. 6.80), and garnered the only other 7.00-plus score (a 7.06) for the key statement "This product is easy to use." Not too far behind Dell, the rest of the field was packed almost too tightly to see any space among them; a mere .05 points separated Oracle (Sun), Hitachi and IBM (tied), NetApp and EMC Corp.

And if perfection is the goal of midrange array vendors, they appear to be well on their way, as the average score for all eight products for the statement "This product was installed without any defects" was a glossy 6.81.

FEATURES AND FUNCTIONS
A midrange storage system from just a few years ago would hardly be recognizable by today's standards.


PRODUCTS IN THE SURVEY
The following vendors and midrange array model lines were included in the Quality Awards survey. The number of responses for the finalists are included in parentheses after the product names.

• 3PAR InServ E200 or F200/F400*
• Atrato Inc. Velocity 1000 (V1000)*
• BlueArc Corp. Titan 2000/3000 Series, Mercury*
• Compellent Technologies Inc. Storage Center (14)
• DataDirect Networks Inc. S2A Series*
• Dell Inc. CX Series or Dell EqualLogic PS Series (47)
• EMC Corp. Clariion CX Series (110)
• Fujitsu Eternus DX400 Series*
• Hewlett-Packard (HP) Co. StorageWorks EVA Series and P4000 Series (76)
• Hitachi Data Systems Universal Storage Platform (USP) VM or AMS Series (35)
• IBM DS4000/DS5000/DS6000 (74)
• NetApp FAS200/FAS900/FAS2000 (69)
• NEC Corp. D3/D4/D8 Series*
• Oracle Corp. Sun 6000/7000 Series (35)
• Pillar Data Systems Axiom 300/500/600*
• SGI InfiniteStorage 4000/5000/6000 Series*
• Xiotech Corp. Magnitude 3D or Emprise*

* Received too few responses to be included among finalists.


Responding to user demands, vendors have tricked out midrange arrays with the kinds of features and capabilities that you could once get only with enterprise-class storage systems. Improved features—and more of them—equate to contented users, as evidenced by record high scores in the product features rating category.

Compellent’s 7.26 average score easily outdistanced NetApp, whichnotched a very solid 6.75 giving it a small margin over third-place finisherHP (6.68) which it turned nudged out IBM (6.61) by the same margin.Compellent’s Storage Center scored the highest on all seven categorystatements in the product features category, ranging from a 7.07 (“Thisproduct’s snapshot features meet my needs”) to a 7.36 (“This product’sremote replication features meet my needs”). By delivering these“bread-and-butter” features along with its signature Fluid Data auto-mated tiering, Compellent may be raising the bar a bit for all midrange systems vendors.

That’s not to suggest that any of the product lines are slackerswhen it comes to features. The overall average for the category was a6.64, the highest we’ve seen and substantially higher than the previousmark of 6.33. The average scores for key midrange array requirementswere high for all eight products, such as a 6.79 for “This product’s capacity scales to meet my needs,” highlighted by Hitachi’s 7.11 (theonly other 7.00-plus score in the category) and Compellent’s 7.31.

STORAGE YOU CAN COUNT ON
The true test of a storage system is how well it performs after the showroom shine wears off. Each of the eight vendors' product lines passed the test on this survey easily, with an across-the-board 6.74 average in the product reliability category; this is not only the highest average ever for that category, but the highest average of any category to date.

Compellent again netted the highest ratings for each of the five category statements, ranging from 7.14 to 7.36 and rolling up to a 7.27 average, its highest category score on the survey. But it wasn't alone in "seven heaven." Second-seeded Hitachi earned three 7.00-plus scores for the statements related to meeting service-level requirements, having very little downtime and needing very few unplanned patches, on the way to an impressive 6.97 category average.


THE HEAVY LIFTERS

Vendor/Product                                   Average installed capacity (TB)
Hitachi Data Systems USP VM or AMS Series        87
EMC Clariion CX Series                           70
IBM DS4000/DS5000/DS6000                         66
Oracle Sun 6000 or 7000 Series                   60
NetApp FAS200/FAS900/FAS2000                     59
Hewlett-Packard EVA Series and P4000 Series      53
Dell CX Series or Dell EqualLogic PS Series      43
Compellent Storage Center                        41


[Charts: Overall ratings plus ratings for sales-force competence, initial product quality, product reliability, product features and technical support (1.00-8.00 scoring scale), and "Would you buy this product again?" (% yes), for Compellent Storage Center; Dell CX Series or Dell EqualLogic PS Series; EMC Clariion CX Series; Hewlett-Packard StorageWorks EVA Series and P4000 Series; Hitachi Data Systems USP VM or AMS Series; IBM DS4000/DS5000/DS6000; NetApp FAS200/FAS900/FAS2000; and Oracle Sun 6000/7000 Series. Compellent led every chart.]


Third-place HP joined Compellent and Hitachi in the plus-7.00 club with a 7.03 for the "unplanned patches" statement and finished with an average rating of 6.72, which put it just .01 ahead of EMC's score of 6.71.

CONTINUING SUPPORT FOR THE PRODUCT
Past Quality Awards surveys for midrange arrays had scores in the technical support category that hovered around the lows seen in the sales-force competence category. This survey isn't any different, with support getting the second-lowest overall category average. But the twist here is that the score is still fairly high at 6.59, led once again by Compellent (7.02). Hitachi (6.71) racked up its second second-place finish, with HP (6.69) hard on its heels with another strong performance. HP finished second or third in all five ratings categories.

The only statement in the support category that Compellent didn't score top marks on was "Vendor's third-party partners are knowledgeable"; instead, HP and Oracle (Sun) tied for the lead with a score of 6.72. It's a significant mark for those two vendors, as 54% of HP respondents and 37% of Oracle respondents said they purchased their systems from VARs.

Midrange vendors are also delivering on their support promises. One of Compellent's two 7.29 category scores was for "Vendor supplies support as contractually specified," a statement that all vendors scored well on for a group average of 6.79 (high in the category). Well-trained support staffs were also recognized on the survey, with Compellent (7.07), HP (6.87) and Hitachi (6.80) all standing out for the statement "Support personnel are knowledgeable."

DO IT AGAIN
In addition to the specific statements in each rating category, we asked survey respondents a more subjective question: All things considered, would you buy the product again? Over our five surveys for midrange arrays, the responses have been generally positive and very steady, with an average of 77% to 79% saying "Yes" across all product lines. This time, the "buy again" numbers jumped, reflecting the higher category ratings and, undoubtedly, greater satisfaction with the entire class of midrange storage products.

Overall, 89% of respondents said they would take the plunge again with the same product, led by Compellent's eerily perfect 100%, NetApp and Dell both at 94% and the rest of the field ranging from EMC's 87% to Oracle's (Sun) 83%. Not too shabby when it comes to satisfied customers.

Rich Castagna ([email protected]) is editorial director of the Storage Media Group.


hot spots | lauren whitehouse

Getting in front of backup
Learn about a handful of key technologies that can help storage managers meet their backup recovery time objectives (RTOs) by making the first steps—data capture and transfer—simpler and more efficient.

THE FOCUS ON backup modernization during the last few years has been squarely on the backup target device: tapes and disks. That's where the majority of users have made the most changes. But now that so many users and IT shops have become disk friendly, there's a new focus on the front end of the backup process: the capture and transfer phase.

In 2004, nearly 60% of Enterprise Strategy Group (ESG) survey respondents reported backing up directly to tape. By 2010, only 20% were using tape exclusively. These days, approximately 80% of IT organizations tell ESG they're augmenting backup processes with disk, which helps them meet backup windows and recovery time objectives (RTOs). Still, exponential data growth means greater backup demands and a need for new backup processes. As a result, technologies such as continuous data protection (CDP), replication, source-side deduplication and snapshot are being implemented more frequently. ESG research found a significant uptake in several of these technologies: while the use of snapshots grew only 2% between 2008 and 2010, replication use increased 34%, CDP expanded by 58% and deduplication use improved 66% in the same two-year period.

SNAPSHOT AND IMAGE-LEVEL BACKUP
What if you could eliminate your backup window, accelerate system recovery and facilitate efficient disaster recovery (DR)? Effectively, that's what snapshot- and image-based backup can deliver. A snapshot is a copy of a volume or file system created at a specific point in time. Taking advantage of snapshot functionality for backup can dramatically reduce the impact on apps by eliminating the backup window, providing RTOs of seconds to minutes and enabling better recovery point objectives (RPOs) by enabling more frequent copies per day.

Image backup uses snapshot technology to create a point-in-time

image of a system (such as hardware configuration, OS, applications and data), storing it in a single portable file. Because the recovery point is captured "hot," critical applications don't have to be shut down during backup. This approach eliminates the backup window and enables rapid whole-system recovery to any system (virtual or physical), including to dissimilar hardware. Both of these methods are efficient in the capture, transfer and storage of data. After the initial base copy is made, only incremental blocks are captured and stored.
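A toy sketch of that incremental behavior, assuming fixed-size blocks and in-memory stores purely for illustration: after the base copy, a recovery point is just an ordered list of block hashes, and only blocks with new content get stored.

```python
"""Toy illustration of incremental block capture after a base image."""
import hashlib

BLOCK = 4096

def split_blocks(data, block=BLOCK):
    return [data[i:i + block] for i in range(0, len(data), block)]

def backup(volume_bytes, block_store, recovery_points):
    """Store any block not already present, then record a recovery
    point as the ordered list of block hashes."""
    hashes = []
    for blk in split_blocks(volume_bytes):
        h = hashlib.sha256(blk).hexdigest()
        block_store.setdefault(h, blk)   # new/changed blocks only
        hashes.append(h)
    recovery_points.append(hashes)

def restore(point, block_store):
    return b"".join(block_store[h] for h in point)

store, points = {}, []
backup(b"A" * 4096 + b"C" * 4096, store, points)   # base copy: 2 blocks stored
backup(b"A" * 4096 + b"B" * 4096, store, points)   # incremental: 1 new block
assert restore(points[1], store) == b"A" * 4096 + b"B" * 4096
print(f"unique blocks stored: {len(store)} for {len(points)} recovery points")
```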

CDP
CDP technology continuously captures changes to data at a file, block or application level, supporting very granular data capture and recovery options. It time stamps each write and mirrors it to a continuous data protection retention log. When a recovery is needed, the CDP engine creates an image of the volume for the point in time requested without disrupting the production application.

Block-level CDP operates at the logical volume level and records every write. This type of continuous data protection excels at transparent data capture and at presenting views of the data at different points in time. Typically running on the same server as the application it's protecting, file-level CDP operates at the file-system level and records any changes to the file system. Application-aware CDP tracks critical application process points within the CDP data stream that can greatly simplify recovery, such as transaction-consistent database checkpoints or application-consistent points within email applications.

Continuous data protection completely eliminates discrete backups, replacing them with a transparent, continuous data capture process that puts very low overhead on production servers. Because it captures data as it's created, that data is immediately recoverable. This allows CDP-based solutions to deliver near-zero RPOs.
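A minimal sketch of that mechanism, with a counter standing in for real timestamps and all names as illustrative assumptions: every write lands in a retention log, and any point in time can be materialized by replaying the log up to the requested timestamp, without touching the production write path.

```python
"""Toy continuous data protection journal."""

class CDPVolume:
    def __init__(self, size):
        self.size = size
        self.log = []            # retention log: (t, offset, bytes)
        self.clock = 0

    def write(self, offset, data):
        self.clock += 1          # stand-in for a real timestamp
        self.log.append((self.clock, offset, data))

    def view_at(self, t):
        """Materialize the volume as of time t by replaying the log."""
        img = bytearray(self.size)
        for ts, off, data in self.log:
            if ts <= t:
                img[off:off + len(data)] = data
        return bytes(img)

vol = CDPVolume(16)
vol.write(0, b"hello")
vol.write(0, b"HELLO")
assert vol.view_at(1)[:5] == b"hello"   # recover to any write boundary
assert vol.view_at(2)[:5] == b"HELLO"   # hence near-zero RPOs
```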

REPLICATION
Replication is the bedrock of these strategies, and it's increasingly being used for data protection as a standalone process to provide operational and disaster recovery for applications with tight RPOs or RTOs; as a method of consolidating distributed data for centralized file-level backup; or in conjunction with snapshot or CDP to maintain an off-site copy and facilitate disaster recovery. Replication provides an exact mirror copy of data on a local or remote primary system that can be mounted to rapidly recover from a failure. Storage capacity and bandwidth are optimized with block-level updates and network compression after the initial copy is made.

Replication is available on host systems, storage arrays or in network-based products. Typically, array- and network-based products replicate at the block level and host-based offerings replicate at the file-system level.


Host-based replication operates asynchronously, while array- and network-based replication are configurable for synchronous or asynchronous modes. Synchronous replication occurs in real-time: each write is committed to both primary and secondary storage before it's acknowledged. Asynchronous replication occurs in near real-time: once data has been completely written to primary storage, the written data is replicated on secondary storage.
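The difference between the two modes comes down to when the application sees its acknowledgment. A minimal sketch, with dictionaries standing in for the two storage systems and a queue for the asynchronous transfer link; all names are illustrative assumptions.

```python
"""Contrast of synchronous and asynchronous acknowledgment models."""
from collections import deque

primary, secondary = {}, {}
async_queue = deque()

def write_sync(key, value):
    """Synchronous: the write is on both systems before the app is acked."""
    primary[key] = value
    secondary[key] = value
    return "ack"

def write_async(key, value):
    """Asynchronous: ack after the primary write; the copy trails behind."""
    primary[key] = value
    async_queue.append((key, value))
    return "ack"

def drain_async():
    while async_queue:
        key, value = async_queue.popleft()
        secondary[key] = value

write_async("blk7", b"data")
assert "blk7" in primary and "blk7" not in secondary  # the lag window
drain_async()
assert secondary["blk7"] == b"data"
```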

SOURCE-SIDE DEDUPLICATION
Deduplication identifies and eliminates redundancy, storing only unique data and shortcuts to unique data for duplicates. Data deduplication's role in optimizing backup processes is fairly well documented; however, the focus has mostly been on target-side deduplication solutions. Source-side deduplication ensures that only changed segments are backed up after the initial full copy. That means significantly less data is captured, transferred and stored on disk. This reduces the time needed to perform backups. Because the backup window requirements are minimal, it's possible to back up more frequently, which increases the number of recovery points on disk storage to meet RPO and RTO requirements.
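A toy sketch of the source-side behavior, assuming fixed-size segments and an in-memory index purely for illustration (real products typically use variable-length segments): the client hashes each segment and transfers only those the target hasn't already seen.

```python
"""Toy source-side deduplication of a backup stream."""
import hashlib

SEGMENT = 4096
server_index = set()     # segment hashes already stored on the target

def backup_source_side(data):
    sent = 0
    for i in range(0, len(data), SEGMENT):
        seg = data[i:i + SEGMENT]
        h = hashlib.sha256(seg).hexdigest()
        if h not in server_index:
            server_index.add(h)   # only unique segments cross the wire
            sent += len(seg)
    return sent

full = b"\x00" * 4096 + b"\xff" * 4096        # initial full copy
print(backup_source_side(full))               # 8192 bytes transferred
changed = b"\x00" * 4096 + b"\x01" * 4096     # one segment modified
print(backup_source_side(changed))            # only 4096 bytes transferred
```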

A wholesale replacement of file-level backup is likely for many organizations today, according to ESG research. For example, 55% of IT organizations surveyed by ESG plan to replace existing file-level backup with snapshot and/or CDP solutions. That said, the integration of snapshot, replication, CDP and deduplication into existing backup platforms to augment file-level approaches seems to be a strong trend. That's why several backup vendors have made recent strides to match capture techniques to recovery objective policies, simplifying implementations and optimizing the front end of backup processes.

Lauren Whitehouse is a senior analyst focusing on backup and recovery software and replication solutions at Enterprise Strategy Group, Milford, Mass.


read/write | arun taneja

Storage vendors are stacking the deck
Storage vendors have been busy creating server-to-application product stacks. It looks like the type of ploy that will give them more leverage, and take it away from you.

THERE'S A FUNDAMENTAL SHIFT of titanic proportions taking place in IT. No, I don't mean the massive shift toward using disk in favor of tape to protect data. I'm also not referring to the fundamental changes occurring in storage architectures to improve their interaction with virtual server technologies, nor the increased usage of solid-state storage or automated storage tiering. What's causing this big shift is the crazed passion with which the industry seems to be heading into building proprietary stacks from the server all the way to the application.

Cisco Systems Inc., for example, is building servers and partnering with VMware Inc. and EMC Corp. to create what EMC calls a Virtual Computing Environment (VCE) solution. In reaction to Cisco getting into the server business, Hewlett-Packard (HP) Co. mainstreamed its ProCurve networking group within the company and bought 3Com Corp. to ensure it had a strong networking alternative to Cisco. And Oracle Corp. purchased Sun Microsystems Inc. and then made known its intention to build a complete vertical stack that will be tightly integrated (read proprietary) and use its own virtualization technology.

Even Hitachi Data Systems, seemingly content being a best-of-breed high-end and midrange storage supplier, felt it needed to do something. It reached back to its parent company and announced its own vertical stack using Hitachi servers, which will have a special console to integrate the stack. NetApp then went on to do its deal with Cisco and VMware as a counterpoint to EMC's moves.

Storage vendors are scurrying to line up partners so they aren't left out. The question is whether any of this craziness is necessary or warranted. My answer is a flat "No." I have the advantage of having seen the minicomputer revolution, then the PC revolution followed by the client/server revolution. Now we're undergoing a virtual "everything" revolution. I still remember a previous "vertical" stack era when users chose partners for life. If you belonged to the IBM camp you lived and died by it. Ditto for Burroughs or Honeywell or Unisys. Device interoperability didn't exist, and applications often worked on only one stack. You effectively belonged to the computer company.

For decades, the industry worked very hard to break this "own everything" mentality. We developed standard interfaces (such as SCSI) to make storage work with many different systems. EMC did a phenomenal job to create a "best-of-breed" storage solution that worked with any system that supported a SCSI interface. Then other storage interfaces, like Fibre Channel (FC), were developed; it took a few years to get the interoperability kinks out, but it got done. The benefit to IT has been immeasurable and has happened across all disciplines. Applications can run on many different OSes, APIs are available for managing devices and printers work with every system in the market. In addition, TCP/IP opened up a new world. We've finally arrived at an era where choice matters, where best of breed matters. You still place your bet on a vendor, but not for everything.

Now it seems we’re heading back to the ’70s. It doesn’t matter whostarted the “stack war” or who’s partnering with whom. What matters isthat your choices are about to be taken off the table. For example, keep aneye on Oracle over the next decade;they now control hardware, database,storage and server virtualization.

The vendors’ reasons are clear, but the direction is all wrong. As a customer, you want to be able to buyEMC storage even if you have all Sunservers. Or you may want NetApp systems for NAS and EMC for SANs.Maybe 3PAR’s your choice for virtualserver storage and you favor NexsanCorp. for data archiving. Best-of-breed systems keep everyone on their toes,and I’d hate to see that disappear as each player opts to partner with others.

I can understand how the vertical stack strategy is in the best interest of Cisco or Oracle. What I don't see is why vendors such as EMC, Hitachi Data Systems, NetApp and VMware would want to play this game. Their success was built on delivering best-of-breed products and being able to play with everyone. So why limit yourself by choosing partners?

You will be the final arbiter. You'll either let the big guys dictate what you'll buy or you won't. It might seem innocuous right now, but it does matter.

I like choices. I like that VMware has Microsoft Corp. and Citrix Systems Inc. to compete with, and that 3PAR, EMC, Hitachi Data Systems, IBM and NetApp are contenders for high-end storage.

Ultimately, you'll vote with your dollars. Don't forget: It was users who threw out the proprietary stacks a few decades ago. You have the same kind of leverage now, but at an earlier stage in the process. It's up to you.

Arun Taneja is founder and president of the Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies. He can be reached at [email protected].


snapshot

Aging RAID still an effective data protection technology

RAID HAS TAKEN some knocks lately, like criticism that it's a nearly 30-year-old technology that can't stand up to the rigors of a modern data storage environment. But maybe it's been around so long because it's so good: 96% of respondents to our survey rely on some form of RAID. The most-used RAID configuration isn't much of a surprise, as 87% use RAID 5, followed by RAID 1 (52%) and RAID 10 (40%). Seventy-five percent of RAID users employ more than one type of RAID on their storage systems, and nearly 20% juggle four different RAID configurations in their shops. But that's not to suggest users are totally enamored with RAID, as their two biggest gripes are inefficient use of disk capacity (36%) and lengthy rebuild times (32%); however, 10% of respondents didn't see any particular shortcomings. RAID appears to be doing its job well: 72% had to perform RAID rebuilds at least once in the last year and although rebuilds took a little while (54% said three hours to 12 hours), 93% reported that they didn't lose any data. To quote one respondent: "RAID rocks!" —Rich Castagna

“ We spin about 500 TB of array storage and have yet to experience negative issues in our environment that can be attributed to RAID devices.” —Survey respondent
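For readers who haven't looked under the hood in a while, the arithmetic behind a RAID 5 rebuild is plain XOR, which is also why rebuilds take hours: recovering one lost block means reading every surviving block in the stripe, across the whole array. A minimal sketch with illustrative stripe contents:

```python
"""Worked example of RAID 5 parity and rebuild."""
from functools import reduce

def xor_blocks(blocks):
    # XOR all blocks together, byte by byte.
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

stripe = [b"\x0a\x0b", b"\x1c\x1d", b"\x2e\x2f"]   # data blocks on 3 drives
parity = xor_blocks(stripe)                        # stored on a 4th drive

lost = 1                                           # drive 1 fails
survivors = [blk for i, blk in enumerate(stripe) if i != lost] + [parity]
rebuilt = xor_blocks(survivors)                    # read everything, XOR it all
assert rebuilt == stripe[lost]
print("rebuilt block:", rebuilt.hex())
```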

How do you determine which RAID level to use?
• Based on specific application needs: 63%
• Use same RAID level on all systems: 15%
• Based on disk capacity: 9%
• Use storage system vendor's recommendations: 8%
• Other: 5%

On average, how long did your RAID rebuilds take to complete?
• Less than two hours: 20%
• Three hours to 12 hours: 54%
• 13 hours to 24 hours: 18%
• 25 hours to 48 hours: 4%
• Longer than 48 hours: 3%

45% have had a second drive fail in the same RAID group during a rebuild.

Rate the following data protection technologies in order of their importance to your company. (Least important = 1.0, Most important = 5.0)

Rating   Protection technology
4.2      RAID
3.9      Traditional backup
2.8      Continuous data protection
2.8      Replication
1.3      Cloud backup services
