Pressures on IT Groups
Do more with less
Keep running globally, 24x7
Adapt rapidly to changes
Maximize the value of information
IT managers need storage solutions designed to:
Consolidate storage and data
Protect data and enable compliance
Scale non-disruptively
Simplify management
Primary Storage
Nearline Storage
Heterogeneous Storage Gateway
Content Delivery
Remote/Small Office
FAS900 Series: unified data center class storage
NearStore®: economical secondary storage
NetCache®: accelerated and secure access to Web content
gFiler™: intelligent gateway for existing storage
FAS200 Series: remote office and small business storage
Data ONTAP™ operating system – SAN, NAS, iSCSI, caching
NetApp Product Lines
Common software architecture; comprehensive portfolio
NetApp Storage Software Suite
Storage consolidation
• HA: clustered failover
• Data access: FCP, iSCSI, NFS, CIFS
• Data cloning: FlexClone
• Migration: SnapMover
• RAID / volume management: RAID-DP, SyncMirror, MultiStore, FlexVol
Data protection
• Backup and recovery: Snapshot, SnapRestore, SnapVault
• Disaster recovery: SnapMirror, MetroCluster
Storage management
• SRM: DataFabric Manager, SnapDrive
• Global namespace: Virtual File Manager
• Application integration: SnapManager, Single Mailbox Restore
Compliance
• Data permanence: SnapLock, LockVault
Upgrade paths
• Flexible upgrade options
• Easy data migration
• Investment protection
FAS250 → FAS270 → FAS920/c → FAS940 → FAS980
Server Consolidation and NAS: Managing an Appliance vs. a Server
• OS management and licensing
• Storage and volume management
• Data management and protection
• Performance per controller
• Active-active protection vs. active-passive (not all active-active solutions are the same)
• Multi-site management
• ONTAP microkernel: no service packs, no hotfixes, no patch-level management
• Data-path redundancy and resiliency
• File-system journaling
• Snapshot features
• Rolling upgrades on clustered failover (CFO)
• SPEC SFS97_R1 / NetBench results; network-to-RAID storage efficiency
• Tens of thousands of users, all with "like local disk" performance
• CFO vs. DM configurations
• Replication / SnapMirror: incremental block efficiency
Performance and Scalability: SPEC SFS97_R1 NFS benchmark
[Chart: overall response time (msec, 0–12) vs. throughput (NFS ops/sec, 0–80,000) for the FAS980c, FAS960c, FAS940c, NS700 w/Failover, and NS700G Clust/CX700]
Data Protection with Replicas
[Diagram: active volume (1.0 TB growing to 1.2 TB) with hourly Snapshots Oct2_12, Oct2_14, Oct2_16 and nightly Snapshots Sept30, Oct3, Oct4, Oct5]
Save the 10 latest hourly copies (@8, 10, 12, 14, 16), 7 nightly roll-ups, and 4 weekly roll-ups: this equates to 1 month of on-line, instant recovery.
• 255 Snapshots per volume
• Zero performance impact
• Efficient space consumption
○ SnapReserve is always flexible
• Roll back an entire volume (8–16 TB) in under 1 second with SnapRestore
• Files and directories instantly available for recovery
• Higher quality of service to the end user
• Available as user or admin operations (superusers/power users)
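The rotation above (10 hourly, 7 nightly, 4 weekly) can be sketched as a small retention function. This is an illustrative model, not Data ONTAP code; the snapshot naming, the fixed hourly times, and the Sunday weekly roll-up are assumptions of the sketch.

```python
from datetime import date, datetime, timedelta

def snapshots_to_retain(snaps, hourly=10, nightly=7, weekly=4):
    """Pick which snapshots survive a 10-hourly / 7-nightly / 4-weekly
    rotation. `snaps` is a list of (name, datetime) pairs sorted newest
    first. Hourlies are assumed to run at 8, 10, 12, 14, and 16;
    nightlies at midnight; weeklies are the Sunday nightlies."""
    hourlies = [n for n, t in snaps if t.hour in (8, 10, 12, 14, 16)]
    nightlies = [(n, t) for n, t in snaps if t.hour == 0]
    weeklies = [n for n, t in nightlies if t.weekday() == 6]
    keep = set(hourlies[:hourly])
    keep.update(n for n, _ in nightlies[:nightly])
    keep.update(weeklies[:weekly])
    return keep
```

Holding the rotation to roughly 21 slots is how a month of history coexists with flat space consumption, which is the trade the slide describes.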
Design Differences: More Than Just a Performance Issue
[Diagram: active file system (AFS) block maps (A, B, C, C′, C″) across snapshots Oct2_12, Oct2_14, Oct2_16 under each design]
• Other snapshot implementations (a file-system retrofit?): fixed allocation of physical blocks in a formatted snap volume
• NetApp Snapshots: soft allocation of a variable number of blocks
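The contrast can be made concrete with a toy model. Assume nothing about NetApp internals here: the classes, block maps, and reserve handling below are invented purely to illustrate copy-out-into-a-fixed-reserve versus pointer-preserving snapshots.

```python
class CopyOutSnapshot:
    """Retrofit design: the first overwrite after a snapshot copies the
    old block into a fixed, preformatted snap volume (extra I/O, and
    the reserve can fill up)."""
    def __init__(self, blocks, reserve_cap):
        self.active = dict(blocks)      # block id -> contents
        self.reserve = {}               # copied-out original blocks
        self.reserve_cap = reserve_cap  # fixed physical allocation

    def write(self, bid, data):
        if bid not in self.reserve:     # first overwrite: copy out
            if len(self.reserve) >= self.reserve_cap:
                raise RuntimeError("snap volume full")
            self.reserve[bid] = self.active[bid]
        self.active[bid] = data


class PointerSnapshot:
    """No-overwrite design: a snapshot freezes the pointer map; writes
    go to fresh blocks, so nothing is copied at write time."""
    def __init__(self, blocks):
        self.store = dict(enumerate(blocks))      # block id -> contents
        self.active = {i: i for i in self.store}  # logical -> physical
        self.snaps = []

    def snapshot(self):
        self.snaps.append(dict(self.active))      # pointers only

    def write(self, lid, data):
        pid = max(self.store) + 1                 # allocate a new block
        self.store[pid] = data
        self.active[lid] = pid
```

In the first design the snap volume is a hard ceiling; in the second, snapshot cost is only the blocks the active file system later diverges from.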
Snapshot (things to test)
• Create a snap schedule with 28 snapshots: retain 10 hourly, 14 nightly, 4 weekly
• Dump 20% new data onto the volumes
○ Resize the SnapReserve on the fly to add 15% more reserve
• Delete 30% of the data and resize the SnapReserve back to the original
• Delete 50% of the data and run a SnapRestore
• Try to hurt performance with sporadic Snapshot events
• Create 100 snapshots manually: what happens to performance?
• Run plenty of create/change/modify operations, then create another 100 snapshots
• Delete any odd-numbered snapshots
• What happens to performance, snapshot maintenance, and file-system fragmentation?
Block-Level Replication: an even greater effect on recovery and resync
[Diagram: source and mirror block maps (A, B, C, C′, C″) across snapshots Oct2_12, Oct2_14, Oct2_16, with a level-0 baseline transfer followed by incremental transfers Incr-1 and Incr-2]
SnapMirror Highlights: things that are easy to test
• Robust incremental block replication
○ Volume- and qtree-level replication
○ Dynamic bandwidth management
○ Resilient network-congestion handling
• Bi-directional resynchronization
○ Break a mirror, update the DR side, then bring the new changes back to the original site
• Scalable and flexible
○ Asynchronous over IP, semi-synchronous, and fully synchronous modes
○ IP or FC transports
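The incremental transfers in the diagram reduce to a block diff between two snapshots. A minimal sketch, with toy data structures rather than the SnapMirror wire protocol:

```python
def block_delta(old_snap, new_snap):
    """Blocks to ship for one incremental update: everything new or
    changed between two consecutive source snapshots. A snapshot is
    modeled here as a dict of block number -> contents."""
    return {bn: data for bn, data in new_snap.items()
            if old_snap.get(bn) != data}

def apply_delta(mirror, delta):
    """Replay a delta on the destination volume."""
    mirror.update(delta)
    return mirror
```

After the level-0 baseline copy, each scheduled transfer moves only the delta, which is what the slide means by incremental block efficiency.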
Growth Drivers: Rethinking Storage Economics
• The basics
• Investment protection: unified storage
• Redefining data protection: NearStore
• Next-generation storage: storage grid
[Chart: availability and performance vs. cost across tiered storage: ATA primary storage, ATA-based nearline storage, and FC-based storage]
Uncorrectable Error Rate = Risk of Data Loss
[Chart: bits per RAID group, from 1.0E+09 to 1.0E+16, for 36 GB, 73 GB, 147 GB, 250 GB, and 300 GB drives; assumes an 8-drive (7+1) RAID group]
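The chart's point is that rebuilding one failed drive means reading every bit on the survivors, so the odds of hitting an uncorrectable read grow with drive size and group width. A back-of-the-envelope model (the 1e-14 bit error rate is a typical ATA-class spec assumed for this sketch, not a figure from the slide):

```python
import math

def p_read_error_during_rebuild(drive_bytes, surviving_drives, ber=1e-14):
    """Probability of at least one uncorrectable bit error while reading
    every surviving drive end to end to rebuild a failed one."""
    bits = drive_bytes * 8 * surviving_drives
    # 1 - (1 - ber)**bits, computed stably for tiny ber and huge bits
    return -math.expm1(bits * math.log1p(-ber))

# 7+1 RAID group of 250 GB drives, as in the chart:
risk = p_read_error_during_rebuild(250e9, 7)
```

With single parity that event during a rebuild means data loss; with a second parity drive the same event is still correctable, which is the motivation for RAID-DP on the next slides.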
RAID-DP: A Disruptive Approach Enabling ATA for Primary Storage
Industry answer: mirroring. NetApp answer: RAID-DP.
– 4,000x better protection
– <1% performance impact
– Comparable economics
RAID-DP Overview
Description:
• Two parity drives per RAID group
• An extension of RAID 4
• A standard feature
Primary benefits:
• >4,000x better protection
• <1% performance impact
• Comparable economic efficiency
Secondary benefits:
• Effortless conversion
• Lower RAID-reconstruction impact
• Seamless integration with NetApp SyncMirror™
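How the two parity drives relate to RAID 4 can be shown with plain XOR. This sketch demonstrates only the single-parity (RAID 4) half; RAID-DP's second drive holds an additional diagonal parity, which is what covers a second simultaneous failure. The stripe contents are invented for the example.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across a list of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# One stripe across three data drives plus the RAID 4 row parity:
stripe = [b"AAAA", b"BBBB", b"CCCC"]
row_parity = xor_blocks(stripe)

# Drive 1 fails: its block is the XOR of the survivors and the parity.
rebuilt = xor_blocks([stripe[0], stripe[2], row_parity])
assert rebuilt == b"BBBB"
```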
Resilient Storage: Reconstruction vs. Rapid RAID Recovery
Reconstruction:
1. Read each block from each of the remaining drives
2. Compute the block for the missing drive
3. Write the block to a spare
For example, an 8-drive RAID group of 144 GB drives (136 GB usable) requires data transfers of 7 x 136 GB read plus 1 x 136 GB written = 1,088 GB, and computes parity for 2.44 x 10^8 blocks!
Rapid RAID Recovery:
1. Copy the "sick" disk to a spare
2. Reconstruct only the few blocks that cannot be read
For example, the same 8-drive RAID group requires data transfers of 1 x 136 GB read plus 1 x 136 GB written = 272 GB, and computes parity for just a few blocks.
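The I/O arithmetic in the two examples above is easy to check:

```python
def reconstruction_gb(group_size, usable_gb):
    """Full reconstruction: read every surviving drive, write the spare."""
    reads = (group_size - 1) * usable_gb
    writes = usable_gb
    return reads + writes

def rapid_recovery_gb(usable_gb):
    """Rapid RAID Recovery: one read pass of the sick disk, one write
    pass to the spare (plus a handful of reconstructed blocks)."""
    return usable_gb + usable_gb

assert reconstruction_gb(8, 136) == 1088   # 7 x 136 read + 136 written
assert rapid_recovery_gb(136) == 272       # 1 x 136 read + 136 written
```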
FlexVol: Dramatically Higher Utilization
• One command to create, expand, or shrink volumes
• Dynamic reclamation of unused space
• Pooled physical storage across disks
• Volumes not tied to physical storage
FlexVol™ Volumes: Improving Space Utilization
Regular volumes (Vol 1–Vol 4):
• Free space fragmented across volumes
• Free space not available to other volumes
FlexVol volumes:
• No preallocation of free space
• Free space available for use by other or new volumes
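The pooling behavior can be modeled in a few lines. The class below is a toy with invented names, not Data ONTAP semantics; its only point is that space freed by one flexible volume is immediately available to every other volume in the pool.

```python
class Aggregate:
    """Toy model of pooled physical storage: flexible volumes draw
    blocks from one shared pool instead of preallocating disks."""
    def __init__(self, total_gb):
        self.total = total_gb
        self.used = {}              # volume name -> GB actually consumed

    def free(self):
        return self.total - sum(self.used.values())

    def write(self, vol, gb):
        if gb > self.free():
            raise RuntimeError("aggregate out of space")
        self.used[vol] = self.used.get(vol, 0) + gb

    def shrink(self, vol, gb):      # freed space returns to the pool
        self.used[vol] -= gb

pool = Aggregate(1000)
pool.write("vol1", 300)
pool.write("vol2", 100)
pool.shrink("vol1", 200)    # reclaimed space is reusable by any volume
assert pool.free() == 800
```

Contrast this with regular volumes, where the 200 GB given back by vol1 would stay stranded inside vol1's preallocation.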
Before FlexClone: Large Enterprise SW Developer
[Diagram: production volume and mirrored copy, each fanned out to full copies Test 1…Test N and Dev 1…Dev N]
Challenges:
• Copies consume lots of disk (<10% unique data)
• Copies take a lot of time, slowing time to market
After FlexClone
[Diagram: production volume and mirrored copy cloned for Test 1, Test 2, QA, Develop 1, and Develop 2]
Solution: FlexClone
• Instantaneous copies
• Low storage overhead
• Faster time to market, higher quality, lower cost
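The "<10% unique data" challenge is exactly what clone-by-reference exploits. A toy accounting model (invented structures, not FlexClone internals):

```python
class CloneSet:
    """A clone starts as a pointer map sharing every block with its
    parent, so creating it costs ~0 space and time; only the blocks a
    clone later overwrites become unique to it."""
    def __init__(self, parent_blocks):
        self.parent = dict(parent_blocks)   # block id -> data
        self.clones = {}                    # name -> private overwrites

    def clone(self, name):
        self.clones[name] = {}              # instantaneous, nothing copied

    def write(self, name, bid, data):
        self.clones[name][bid] = data       # space grows with divergence

    def unique_blocks(self, name):
        return len(self.clones[name])

prod = CloneSet({i: f"blk{i}" for i in range(1000)})
for name in ("test1", "test2", "dev1"):
    prod.clone(name)                # three instant copies
prod.write("dev1", 7, "patched")
assert prod.unique_blocks("test1") == 0   # shares all 1000 parent blocks
assert prod.unique_blocks("dev1") == 1    # pays only for what diverged
```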
Unified Storage: Investment Protection
[Diagram: NetApp Fabric-Attached Storage serving departmental and enterprise NAS (file) over Ethernet and iSCSI, and enterprise SAN (block) over Fibre Channel and dedicated Ethernet]
Large Service Provider: Before Data ONTAP 7G
Application overview:
• Storage utility growing at 1.75 TB/day
• Large FC SAN with Windows and UNIX hosts
Challenges:
• Backup window
• Manpower for backup
• Timely data recovery
• Complex provisioning
• Costs
Large Service Provider: After Data ONTAP 7G
Solution: gFiler SAN virtualization
– Snapshot
– SnapMirror
– gFiler-based backup
Improved productivity:
– Simpler provisioning
– 4x faster backup
Improved SLA:
– Faster roll-outs
– Faster recovery
[Diagram: Windows and UNIX hosts attached through FC SANs to gFiler SAN virtualization]