ETERNUS DX80 S2, DX90 S2, DX410 S2 and DX440 S2
Common Features
Copyright Fujitsu, Release August 2011
Contents (1)
Reliability
Controller Management
RAID 5+0 & RAID 6
RAID and Hard Disk Features
Dynamic LUN Configuration
Data Encryption
Contents (2)
Environmental Effort and Green
Thin Provisioning
Miscellaneous
Reliability
Redundancy of DE Access Path
Data Protection by Data Block Guard
Redundancy of DE Access Path
Backend CM operation secures the redundancy of the DE access path.
In normal operation, each CM has a CM Expander that controls the disks in the DEs assigned to it. The CM Expander is a CM-internal component that controls DE access.
If one CM fails but its CM Expander is still functional, the surviving CM continues using the CM Expander of the failed CM to manage all disks.
[Diagram: CM#0 and CM#1 in the CE each use a CM Expander (EXP) to control DE access over the IOM6 expanders in the DEs; after a CM fault, the CM Expander of the failed CM continues to be used by the surviving CM (Backend CM is running).]
Data Protection by Data Block Guard
An 8-byte check code is added to every 512 bytes of data to ensure data integrity on the disk and in the cache.
This guarantees the consistency of all stored data.
[Diagram: on WRITE, the Controller Module (1) applies the check code (CC) to the user data and (2) writes data and check code to disk; the cache is ECC protected. On READ, the check code is (3) verified against the data read from disk and (4) verified again and removed before the user data is returned.]
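The idea can be sketched in a few lines. The actual ETERNUS check-code layout is not public; the 8-byte code below (a CRC32 of the block data plus the logical block address, similar in spirit to T10 DIF) is an assumption for illustration only.

```python
import struct
import zlib

BLOCK_SIZE = 512

def attach_check_code(block: bytes, lba: int) -> bytes:
    """Append an illustrative 8-byte check code (4-byte CRC32 of the data
    plus the 4-byte logical block address) to a 512-byte block."""
    assert len(block) == BLOCK_SIZE
    code = struct.pack(">II", zlib.crc32(block) & 0xFFFFFFFF, lba & 0xFFFFFFFF)
    return block + code

def verify_and_strip(stored: bytes, lba: int) -> bytes:
    """Verify the check code on read; raise on corruption or a misplaced block."""
    block, code = stored[:BLOCK_SIZE], stored[BLOCK_SIZE:]
    crc, ref = struct.unpack(">II", code)
    if crc != (zlib.crc32(block) & 0xFFFFFFFF):
        raise IOError("data corrupted")
    if ref != (lba & 0xFFFFFFFF):
        raise IOError("block read from wrong address")
    return block
```

Storing the address in the code is what lets the check catch not only bit rot but also blocks written to or read from the wrong location.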
Controller Management
Cache Mechanism
DX410 S2 and DX440 S2 Cache Memory Configuration
Multipath Functionality
Cache Mechanism
Cache memory on each CM is divided into two logical areas: a Local area and a Mirror area for the other CM.
ETERNUS DX Entry and Midrange models: Entry models come with a single or dual CM, Midrange models always with two CMs. For reliability, data is mirrored in cache memory between the two CMs.
Cache data is backed up in case of a main power failure:
DX80 S2 and DX90 S2
• Onto nonvolatile memory (NAND Flash) on the CM
• A System Capacitor Unit (SCU) on the CM provides power during the fast copy process
DX410 S2 and DX440 S2
• Onto nonvolatile memory (SSD) on the CM
• Battery Backup Units (BBU) supply power during the copy process
[Diagram: in a 2-CM configuration, the cache of CM0 and CM1 each holds its own Original data and a Mirrored copy of the other CM's data.]
DX410 S2 / DX440 S2 Cache Memory
The cache memory configurations of CM#0 and CM#1 must be identical.
DX410 S2
• Supported cache configurations (two CMs): 8 GB and 16 GB
• 2 GB DIMMs available; a memory expansion kit always contains 4 DIMM modules
• Populate the DIMM slots marked 1 first, then the slots marked 2
DX440 S2
• Supported cache configurations (two CMs): 24 GB, 48 GB, 72 GB and 96 GB
• 4 GB and 8 GB DIMMs available; a memory expansion kit always contains 6 DIMM modules
• Populate the DIMM slots marked 1 first, then the slots marked 2
[Diagram: DX410 S2 DIMM slots (Slot 0 to Slot 3 per CM) and DX440 S2 DIMM slots (Slot 0 to Slot 5 per CM), with the slots alternately marked 1 and 2 to indicate the population order.]
Multipath Functionality (1)
All ETERNUS DX Entry and DX Midrange systems support the assigned access path configurations Active-Active / Preferred Path and Active-Active.
Each RAID Group has an assigned CM (the RAID Group owner) that handles I/O to this RAID Group and its associated Volumes.
If I/O is sent to the CM that is not the RAID Group owner, this I/O is transferred internally over the CM midplane to the owner CM. This has a slight impact on I/O performance.
RAID Groups are assigned to a CM with the ETERNUS Manager GUI, either manually or automatically when a RAID Group is created. It is possible to change the assigned CM later if needed.
[Diagram: inside the ETERNUS DX, RAID#0 (assigned CM = CM0) and RAID#1 (assigned CM = CM1) serve LUN-1 and LUN-2.]
Multipath Functionality (2)
This is a typical I/O Multipath configuration: LUN1 is mapped to the server via two ETERNUS DX ports, one on CM0 and one on CM1.
The Multipath driver can now pass the data over both physical lines to a particular Volume in the DX system. In this assigned CM configuration, LUN1 is controlled by CM0.
The Multipath driver controls how the access paths are utilized. For example, these MP drivers send data directly to the assigned CM path as long as that CM is fully functional:
• ETERNUS Multipath Driver (EMpD)
• VMware vSphere® Multi Path Driver
[Diagram: the MP-Driver on the server can send data over both paths to LUN-1.]
Multipath Failover
Host Response is set to Active-Active / Preferred Path.
The green LUN1 is assigned to CM0: the CA port of CM0 is the optimized path, the CA port of CM1 is the non-optimized path.
The blue LUN2 is assigned to CM1: the CA port of CM1 is the optimized path, the CA port of CM0 is the non-optimized path.
For both LUNs only the optimized path is used. In case of a path failure, the non-optimized path is used for I/O.
[Diagram: an application server with HBA#0 and HBA#1 running ETMpD sends I/O requests to LUN1 and LUN2 in the ETERNUS DX S2 over an active path to the CA port of the owning CM, with a standby path to the CA port of the other CM.]
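The preferred-path behavior can be sketched as follows. The port names and the dictionary-based path state are illustrative, not the real driver API:

```python
def pick_path(lun_owner_cm: int, path_state: dict) -> str:
    """Choose the CA port for a LUN under Active-Active / Preferred Path:
    use the owner CM's port (optimized path) while it is up, otherwise
    fall back to the other CM's port (non-optimized path)."""
    optimized = f"CM{lun_owner_cm}-CA"
    fallback = f"CM{1 - lun_owner_cm}-CA"
    if path_state.get(optimized, False):
        return optimized
    if path_state.get(fallback, False):
        return fallback
    raise RuntimeError("no path available")
```

For a LUN owned by CM0 this returns "CM0-CA" while that path is alive, and "CM1-CA" only after the optimized path fails.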
Controller Module Failover
In case a controller fails, the RAID Group ownership is handed over to the other controller and operation can continue.
In this example, CM1 now controls the RAID Groups previously owned by CM0.
[Diagram: after CM0 crashes, I/O access continues via CM1, whose cache holds its own Local area and the Mirror of CM0.]
RAID 5+0 and RAID 6
Supported RAID Levels
Comparison RAID 5+0 and RAID 5
Improved Reliability with RAID 5+0
RAID 6 Support
Comparison Between RAID 5, 5+0 and 6
Supported RAID Levels
[Diagram: examples of supported RAID levels — RAID 5 [4+1], RAID 5+0 2x[2+1] and RAID 6 [4+2].]
Comparison RAID 5+0 and RAID 5
RAID 5+0 provides high performance and large capacity by striping the data and parity blocks over two RAID 5 configurations. This increases the data transfer rate in comparison to a standard RAID 5 configuration.
Because RAID 5+0 in practice consists of two RAID 5 sets, it enables a RAID Group with double the capacity of RAID 5.
RAID 5 can be set up from 2+1 up to a maximum 15+1 disk configuration.
RAID 5+0 can be set up from 2 x (2+1) up to a maximum of 2 x (15+1).
[Diagram: RAID 5 (3D+1P) with 4 drives compared to RAID 5+0 (3D+1P) x 2 with 8 drives; data ("D") and parity ("P") blocks are striped across two sets, each equivalent to a RAID 5 (3D+1P).]
Improved Reliability with RAID 5+0
With RAID 5+0, one drive from each of the RAID 5 sets can fail without loss of data.
After a single disk failure, RAID 5+0 has a shorter rebuild time than RAID 5 with the same number of data disks: RAID 5+0 has fewer disks per RAID 5 set and therefore fewer data disks to read for the re-calculation.
[Diagram: RAID 5 (6D+1P) with 7 drives, where the rebuild time grows with the number of drives, compared to RAID 5+0 (3D+1P) x 2 with 4+4 drives, which has a shorter rebuild time than 6D+1P because each set contains only a small number of drives. "D" = data area, "P" = parity area.]
Improved Performance with RAID 5+0
RAID 5+0 writes faster than RAID 6 due to the lower write load: RAID 6 needs more calculation time during writes for its double parities.
RAID 5+0 can recover from a double disk failure provided the disks fail in different RAID 5 sets.
[Diagram: RAID 6 (6D+2P) with 8 drives creates two parity blocks (P, Q) per stripe, causing a high load when writing but allowing recovery from two concurrent drive failures; RAID 5+0 (3D+1P) x 2 with 8 drives achieves faster data transfer through striping over two RAID 5 (3D+1P) sets and can recover from a single drive failure in each set. "D" = data area, "P" = parity area.]
Further Improved Reliability with RAID 6
RAID 6 tolerates two simultaneous hard disk failures in the same RAID Group, independent of the disk positions. RAID 5+0 can recover from a double disk failure only if the disks do not fail in the same RAID 5 set.
This is advantageous when using large-capacity hard disks that need a long time to rebuild, for example Nearline SAS disks.
[Diagram: RAID 6 (6D+2P) with 8 drives stripes data ("D") and two different parity areas ("P", "Q") used for the recalculation of missing data, and is able to recover from two concurrent drive failures.]
Comparison Between RAID 5, 5+0 and 6 (1)
RAID 6 has advantages in reliability.
RAID 5 has advantages in data efficiency.
RAID 5+0 has advantages in write performance.
RAID level | Reliability 1) | Data efficiency 2) | Write performance 3)
RAID 5     | OK             | Very good          | Good
RAID 5+0   | Good           | Good               | Very good
RAID 6     | Very good      | Good               | OK
1), 2), 3): see next page
Comparison Between RAID 5, 5+0 and 6 (2)
1) Data reliability
• RAID 5: able to recover from a single drive failure
• RAID 5+0: able to recover from a double drive failure, as long as only one drive in each RAID 5 set fails
• RAID 6: able to recover from a double drive failure even within the same RAID set
2) Data efficiency
• When comparing equally sized data capacities: RAID 5 (6+1) compared to RAID 5+0 (3+1) x 2 compared to RAID 6 (6+2)
3) Write performance
• RAID 5+0 writes the two RAID 5 sets in parallel: RAID 5 (6+1) compared to RAID 5+0 (3+1) x 2
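The data-efficiency ranking can be checked with a few lines of arithmetic. The helper below is a simple illustration, not part of any ETERNUS tooling:

```python
def usable_fraction(data_disks: int, parity_disks: int, sets: int = 1) -> float:
    """Usable capacity as a fraction of the raw capacity of one RAID Group."""
    total_disks = sets * (data_disks + parity_disks)
    return sets * data_disks / total_disks

# The three layouts from the comparison, each with 6 data disks
raid5 = usable_fraction(6, 1)             # RAID 5 (6+1): 7 disks
raid50 = usable_fraction(3, 1, sets=2)    # RAID 5+0 (3+1) x 2: 8 disks
raid6 = usable_fraction(6, 2)             # RAID 6 (6+2): 8 disks
```

RAID 5 (6+1) uses about 86% of the raw capacity for data, while both RAID 5+0 (3+1) x 2 and RAID 6 (6+2) use 75%, which matches the table's "Very good / Good / Good" efficiency column.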
Features of RAID Control
Control of a Failed Disk
Global and Dedicated Hot Spare
Hot Spare Assignment Rules
S.M.A.R.T.
Redundant Copy Triggered by S.M.A.R.T.
Quick Format
Drive Patrol
Control of a Failed Disk
Two different modes are supported:
Rebuild
• ETERNUS DX rebuilds the data of a failed drive from the remaining drives and writes it to a Hot Spare
• When no Hot Spare is available, ETERNUS DX rebuilds the data after the defective drive is replaced
• Redundancy of the RAID Group is recovered after the rebuild is completed
Copy Back
• After redundancy of the RAID Group is re-established, data is copied back from the Hot Spare to the replaced drive
[Diagram: Rebuild writes the failed drive's data to a Hot Spare (HS) to recover redundancy before the disk replacement; Copy Back then copies the data from the Hot Spare to the new drive after the failed disk has been replaced.]
Global and Dedicated Hot Spare
Dedicated Hot Spare (DHS) disk
• Has to be assigned to a specific RAID Group
• When a disk of a RAID Group with a Dedicated HS fails, this Dedicated HS is selected and used to rebuild the RAID Group
Global Hot Spare (GHS) disk
• Can be used by all RAID Groups
• When a disk in any RAID Group fails and no Dedicated HS is available, a Global HS is used to rebuild the RAID Group
[Diagram: in an ETERNUS DX S2, RAID Groups A, B and C share a Global Hot Spare (GHS), while RAID Groups D and E each have a Dedicated Hot Spare (DHS) that takes over when a disk in that group fails.]
Hot Spare Assignment Rules
ETERNUS DX systems search for an appropriate HS disk and assign it to the affected RAID Group. Three search steps are used:
Search 1
• Take a Hot Spare disk with the same capacity and matching rotation speed as the failed drive
• Search through the drive numbers in ascending order
Search 2
• Take a Hot Spare disk with matching rotation speed and larger capacity
• Priority is given to HS disks with a capacity similar to the failed drive
Search 3
• Take a Hot Spare disk with a different rotation speed
• Faster rotating drives are preferred
Automatic Hot Spare Assignment (2)
This example shows the search algorithm for an automatic Hot Spare disk assignment.
[Diagram: a RAID Group of four 2.5" 300 GB 10k rpm disks with one failure. The Hot Spares are considered in priority order: 1) same rotation speed, same capacity (2.5" 300 GB 10k rpm), 2) same rotation speed, similar capacity (2.5" 450 GB 10k rpm), 3) same rotation speed, larger capacity (2.5" 600 GB 10k rpm), 4) different rotation speed, faster (3.5" 450 GB 15k rpm), 5) different rotation speed, slower (3.5" 2 TB 7.2k rpm).]
S.M.A.R.T.
Self-Monitoring, Analysis and Reporting Technology: a built-in mechanism of the disk drive that collects various information.
ETERNUS DX monitors the SMART data to recognize early signs of disk failure. This makes it possible to identify failing disk drives before data redundancy in the RAID Group is lost.
Redundant Copy is triggered when certain SMART alerts occur or thresholds are exceeded. Disk drives are not detached immediately when a threshold is exceeded:
• First the data of the flagged disk is rebuilt to a HS disk; the flagged disk is still a member of the RAID set
• Only after the data rebuild has completed is the flagged disk detached
• After disk replacement, the data is restored from the Hot Spare disk with the Redundant Copy function
Redundant Copy Triggered by S.M.A.R.T.
After a S.M.A.R.T. error threshold is reached, the disk is flagged and the Redundant Copy process starts:
A rebuild to a Hot Spare drive is started automatically while redundancy is maintained; the flagged disk is still a member of the RAID Group.
The flagged drive is removed from the RAID Group and set faulty after the rebuild is completed.
Redundant Copy started by S.M.A.R.T. is interrupted immediately if another complete disk failure occurs and no further HS disks are available.
[Diagram: in a RAID 5 (4+1) group, the data of the drive showing signs of failure is restored and written to the HS while redundancy is kept; the HS drive is then incorporated into the RAID Group, and the failing drive is removed.]
Quick Format
Hosts can access the Volumes of a RAID Group while formatting of a Volume is still in progress:
(1) ETERNUS DX first creates a format control table to manage formatted and unformatted blocks.
(2) After the format control table is created, the LUNs become accessible from hosts. ETERNUS DX starts the physical format of the Volumes from the first block and processes all blocks sequentially.
(3) If ETERNUS DX receives a Read/Write I/O to an unformatted block, the block is formatted first and then access is allowed.
NOTE: ETERNUS will continue a Quick Format after a power cycle.
[Diagram: from the host's view, the Volume goes from Offline to Online for Read/Write right after (1) creation of the format control table; ETERNUS DX then (2) runs the sequential physical format and (3) performs a one-point format whenever a Read/Write request hits a still-unformatted block.]
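Steps (1) to (3) can be sketched with a bitmap acting as the format control table. The class and chunk granularity are assumed simplifications, not the actual firmware structures:

```python
class QuickFormatVolume:
    """Sketch of Quick Format: a bitmap tracks which blocks are formatted,
    a background job formats blocks sequentially, and any host I/O to an
    unformatted block formats that block on demand first."""

    def __init__(self, blocks: int):
        self.data = [None] * blocks
        self.formatted = [False] * blocks   # the format control table
        self.cursor = 0                     # background format position

    def _format(self, i: int) -> None:
        if not self.formatted[i]:
            self.data[i] = b"\x00"          # zero-fill stands in for the format
            self.formatted[i] = True

    def background_step(self) -> None:
        """One step of the sequential physical format."""
        if self.cursor < len(self.formatted):
            self._format(self.cursor)
            self.cursor += 1

    def write(self, i: int, value: bytes) -> None:
        self._format(i)                     # one-point format before host access
        self.data[i] = value

    def read(self, i: int) -> bytes:
        self._format(i)
        return self.data[i]
```

Because the control table is authoritative, the host never observes an unformatted block: it is either already formatted by the background job or formatted just-in-time by the I/O path.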
Drive Patrol
Drive Patrol improves hard disk reliability:
• A background process reads the data on the drives
• If an error is detected, the faulty data is recreated
• The recreated data is written to another block on the same disk drive
Disks can be selected for Drive Patrol in the ETERNUS Manager GUI by different categories, for example HS disks, newly installed disks or unused disks.
[Diagram: Drive Patrol reads out and checks the drive media of a RAID Group; when an error is detected, the data block is recreated from the remaining data and parity blocks and written back.]
Dynamic LUN Configuration
Dynamic LUN Expansion
Logical Device Expansion
LUN Concatenation
RAID Migration
Dynamic LUN Expansion
The ETERNUS DX series is able to expand the capacity of a LUN (Logical Unit Number) without stopping operation.
[Diagram: a RAID 5 (4+1) RAID Group holding LUN0 is expanded to RAID 5 (5+1) with a newly added drive via "Logical Device Expansion"; the capacity of LUN0 is then expanded into the unused area via "LUN Concatenation".]
Logical Device Expansion
Expands the capacity (free space) of a RAID Group by adding unused disk drives. The RAID level can be changed in the process.
[Diagram: an existing RAID 5 (3+1) RAID Group is expanded to the same RAID Group as RAID 5 (5+1) by adding unused disks not assigned to any RAID Group.]
LUN Concatenation
LUN capacity can be expanded by using LUN Concatenation, which consolidates unused areas in the same RAID Group or across multiple RAID Groups for efficient use of the drives.
NOTE: After a LUN Concatenation it may be necessary to adapt the operating system and/or application so that the increased LUN size is recognized.
• Maximum LUN capacity is 128 TB; minimum LUN capacity is 1 GB
• The maximum number of LUNs that can be concatenated is 16
• Different RAID levels can be mixed
[Diagram: RAID Group 1 (RAID 5, 450 GB x 4) holds LUN1 (600 GB) and LUN2 (600 GB); RAID Group 2 (RAID 5, 450 GB x 4) holds LUN0 (1.2 TB) and an unused area of 1.2 TB. Concatenating the unused area to LUN2 grows it from 600 GB to 1.8 TB, spanning both RAID Groups.]
RAID Migration
Enables data to be moved between different RAID Groups without interrupting operation:
• Migrate existing LUNs from a RAID Group to a different one
• Migration between different RAID levels
• RAID Migration to expand LUN capacity
• Migration is transparent from the server's point of view
• DX80 S2, DX90 S2, DX410 S2 and DX440 S2 systems also allow concatenated LUNs to be migrated
[Diagram: two migration examples. Migrating to larger-capacity drives: LUN0 moves from RAID Group 1 (RAID 5, 300 GB drives) into the unused 600 GB space of RAID Group 2 (RAID 5, 600 GB drives). Migrating to a more reliable RAID level: LUN0 moves from RAID Group 1 (RAID 5, 600 GB) to RAID Group 2 (RAID 1+0, 600 GB, mirroring).]
Data Encryption
Two Types of Data Encryption
SED Encryption
Encryption Using Firmware
Two Types of Data Encryption
Disk encryption with SED (Self Encrypting Disk)
Disk encryption by ETERNUS firmware
[Diagram: plain data from Servers A, B and C is stored encrypted in the ETERNUS DX for data removal protection; encryption setting and management is done centrally on the system.]
SED (Self Encrypting Disk)
The authentication key for SED encryption is set up with the ETERNUS Web GUI.
The encryption method is 128-bit AES (Advanced Encryption Standard).
Data is fully encrypted in the drives; the encryption engine sustains the full disk interface bandwidth without affecting performance.
Encryption is transparent for the end user and the ETERNUS DX system.
Data is encrypted in units of RAID Groups.
[Diagram: the CM 1. authenticates against the SED with the authentication key (stored as a hash value), the SED 2. decrypts its internal encryption key, and 3. encrypts/decrypts the data on the media — so data is plain in the CM cache but encrypted on the drive.]
Data Encryption Using Firmware
Selectable via the ETERNUS Web GUI: Fujitsu's unique encryption algorithm or 128-bit AES encryption.
Encrypts user data in units of LUNs; encrypted and unencrypted data can exist in the same RAID Group.
It is possible to encrypt an existing unencrypted Volume.
All disk types can be encrypted.
[Diagram: the CM 1. decrypts the stored encryption key (kept encrypted under a master key), then 2. encrypts/decrypts the data in an encryption buffer — so data is plain on the host side but encrypted in the cache and on the disk.]
Environmental Effort and Green
Power Saving Eco-mode
Saving Power and Space with Latest Drives
Visualization of Power Consumption and Temperature
Power Saving Eco-mode (1)
Eco-mode reduces power consumption. When drives (RAID Groups) are used only a few hours per day (for example for backup), Eco-mode is an effective way of saving power:
• Drive spin-down can be scheduled; drives are spun down when idle during the scheduled time
• Drives are spun up automatically when accessed
• Eco-mode can be managed with the GUI or CLI of the ETERNUS DX series
[Diagram: the data volume (600 GB SAS x 36, RAID 5 (5D+1P), 3 TB) stays on around the clock, while the backup volume drives (1 TB Nearline SAS x 36, RAID 5 (6D+1P), 6 TB) are "ON" only for five hours a day — up to 15% power consumption reduction.]
Power Saving Eco-mode (2)
Eco-mode allows the disks to be spun down for specified periods and thus reduces the overall power consumption. A schedule to spin the disks up and down can be set per RAID Group.
Power Saving Eco-mode (3)
Eco-mode and drive lifetime: Eco-mode has a very small impact on drive reliability.
• For example, with an Eco-mode setting that spins each disk up and down 3 times per day, in 5 years the disk spins up and down a total of 5475 times, far fewer than the rated limit
• The disk manufacturer rates the drives for 50,000 spin up/down cycles
How long does it take to spin up the disks? Typical spin-up times are:
• SAS disk: 15 sec
• Nearline SAS disk: 20 sec
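The cycle count above is easy to verify with the figures from the slide (3 cycles per day, 5 years, a 50,000-cycle rating):

```python
# Spin up/down cycles over the drive's service life with the slide's Eco-mode schedule
cycles_per_day = 3
service_years = 5
total_cycles = cycles_per_day * 365 * service_years   # 3 * 365 * 5 = 5475
rated_cycles = 50_000     # manufacturer's spin up/down rating quoted on the slide
margin = rated_cycles / total_cycles                  # roughly a 9x safety margin
```

Even at three scheduled cycles per day, the drive uses barely a tenth of its rated spin up/down budget over five years.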
Saving Power & Space with Latest Drives
ETERNUS DX systems use the latest disk drive technology to save power:
• 2.5" disks have an inherently lower power consumption, and SSDs a very low one
• The reduced physical size shrinks the system footprint and consequently the data center cooling costs
[Chart: power consumption per drive capacity across drive generations (2009 to 2010) for SAS (450 GB, 600 GB, 900 GB), Nearline SAS (1 TB, 2 TB, 3 TB) and SSD (100 GB, 200 GB, 400 GB) drives — about a 50% reduction in two years.]
Visualization of Power and Temperature
ETERNUS SF Storage Cruiser (optional software) enables administrators to monitor power consumption and ambient temperature data and to visualize statistical information for each operation:
• Real-time monitoring
• History log available by day, week or year
• A single ETERNUS DX system or a group of systems can be monitored
[Diagram: the ETERNUS SF Storage Cruiser storage management software obtains the power consumption and temperature of each registered ETERNUS DX system.]
Thin Provisioning
Introduction to Thin Provisioning
Thin Provisioning simplifies the creation and allocation of storage capacity:
• The system can be configured future-proof instead of being bound to the currently available physical storage
• Applications use capacity on demand from a shared storage pool
• The administrator monitors and replenishes each Pool rather than each Volume
The storage perceived by the application is larger than the physically available storage.
A Thin Provisioning (TP) Pool utilizes TP Volumes; several TP Pools can be created and used in parallel.
TP requires a license key.
[Diagram: Applications 1, 2 and 3 each see a large host-reported capacity allocated from a common TP storage pool, while the capacity actually used by the applications is much smaller.]
Thin Provisioning Function
Thin Provisioning allocates volume capacity via virtual volumes:
• Effective use of the storage capacity reduces the initial investment (start small)
• It is possible to increase the available storage capacity to suit changing business needs; there is no need to add storage capacity per volume or per application
[Diagram: a server that needs 2 TB for the year ahead but 10 TB five years from now writes to a 10 TB virtual volume; the physical capacity (a 2 TB drive pool) only matches the current requirements. As writes fill the pool, ETERNUS SF Storage Cruiser raises a threshold warning and a threshold alert; physical drives are added to the TP Pool and a new threshold is set, with no need to add or change anything on the server side.]
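The allocate-on-first-write behavior can be sketched as follows. The class, the chunk granularity and the threshold API are illustrative assumptions, not the ETERNUS implementation:

```python
class ThinPool:
    """Sketch of a Thin Provisioning pool: virtual volumes report a large
    logical size, but physical chunks are consumed from the shared pool
    only when a region is written for the first time."""

    def __init__(self, physical_chunks: int, warn_at: float = 0.9):
        self.free = physical_chunks
        self.total = physical_chunks
        self.warn_at = warn_at                # alert threshold as a fraction
        self.allocated = set()                # (volume, chunk_index) pairs

    def write(self, volume: str, chunk: int) -> None:
        if (volume, chunk) not in self.allocated:
            if self.free == 0:
                raise RuntimeError("pool exhausted: add physical drives")
            self.free -= 1                    # first write consumes a chunk
            self.allocated.add((volume, chunk))

    @property
    def usage(self) -> float:
        return 1 - self.free / self.total

    def threshold_alert(self) -> bool:
        """True once pool usage reaches the configured threshold."""
        return self.usage >= self.warn_at
```

Rewrites to an already-allocated chunk consume nothing; only first writes draw on the pool, which is why the monitored quantity is pool usage rather than per-volume capacity.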
Miscellaneous
Email Notification
Supporting SNMP, SMI-S, Syslog Function
Access Authority Setting for User
Wake-on-LAN (WOL)
Redundant IP
Host Affinity
Email Notification
When an event occurs, an email can be sent by the ETERNUS DX itself, without additional management software.
[Diagram: without built-in email functionality, management software polls the device status or receives an SNMP trap and creates the email via a mail server; the ETERNUS DX S2 instead creates the email itself and hands it to the mail server directly. Events other than Alarm can be triggered (for example Information, Warning, Error), with enhanced log contents such as the parts type and installation position. The conventional path via management software can still be used.]
Supporting SNMP, SMI-S, Syslog
ETERNUS DX supports:
SNMP v1, v2c and v3
• Can be used to receive traps for ETERNUS information and notifications
SMI-S version 1.4
• Makes it possible to manage the ETERNUS DX with storage management software that complies with SMI-S
Syslog complying with RFC 3164 and RFC 5424
• Event logs can be sent to an external server
• Any server that can receive messages conforming to RFC 3164 (the BSD Syslog Protocol) can be used as the Syslog server, for example the syslog daemon of standard Unix or Linux, or Microsoft Operations Manager (MOM)
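The array's exact message content is not documented here, but the RFC 3164 framing any such receiver expects is easy to show. The hostname, tag and message below are made-up examples; per the RFC, PRI is facility * 8 + severity (day-of-month padding is simplified here):

```python
import time

def rfc3164_message(facility: int, severity: int,
                    hostname: str, tag: str, msg: str) -> str:
    """Build an RFC 3164 (BSD Syslog) line: <PRI>TIMESTAMP HOST TAG: MSG."""
    pri = facility * 8 + severity            # PRI per RFC 3164 section 4.1.1
    timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {hostname} {tag}: {msg}"
```

For facility 1 (user-level) and severity 3 (error) the line starts with `<11>`, which is how the receiving syslog daemon routes the event.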
Access Authority Setting for User
Two default user accounts are available:
• "root" with default password "root"
• "f.ce" with default password "<check code><serial-no. of DX>"
ETERNUS DX restricts the available administration functions per user account role, balancing what a particular user role needs to do against what it is allowed to do.
RBAC (Role Based Access Control) assigns roles and access authorities when an ETERNUS DX user account is created.
Role           | Available functional range
Monitor        | Status display
Administrator  | All settings excluding maintenance work
Storage Admin  | Status display, RAID Group settings, Volume settings, Host connection settings, etc.
Account Admin  | Status display, User account settings, Authentication settings, Role settings
Security Admin | Status display, Security settings, Maintenance information
Maintainer     | All settings including maintenance work
Wake-on-LAN (WOL)
WOL can be enabled for each LAN port of the ETERNUS DX S2 systems with the ETERNUS Web GUI or CLI. A Magic Packet sent to the MAC address of the DX LAN port can be used to power on the ETERNUS DX system remotely.
[Diagram: a WoL utility tool sends a Magic Packet addressed to the LAN port's MAC address (XX-YY-ZZ-AA-BB-CC) to activate the system remotely.]
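The Magic Packet format itself is standard: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, usually sent as a UDP broadcast. A minimal sketch (the broadcast address and port are common conventions, not ETERNUS-specific; `send_wol` is shown but not exercised here):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN Magic Packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace("-", "").replace(":", ""))
    assert len(mac_bytes) == 6
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the packet as a UDP broadcast (port 9, the discard port, is common)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The resulting frame is 102 bytes; the NIC of the sleeping system scans incoming traffic for this pattern and triggers power-on when it sees its own MAC repeated 16 times.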
Redundant IP (1)
It is possible to set different IP addresses within the same subnet on each CM:
• When the role is changed between the CMs, the Master and Slave keep their IP addresses (see next slide)
• A maximum of two IP addresses in two subnets can be set per CM
• The FST port is available on the ETERNUS DX410 S2 and DX440 S2 only
[Diagram: CM0 and CM1 each expose MNT, RMT and FST ports; the MNT ports connect through one switch (IP x.x.x.a and x.x.x.b) and the RMT ports through another (IP y.y.y.c and y.y.y.d).]
Redundant IP (2)
When the Master CM is changed:
• The IP address of the previous Master CM is taken over by the new Master CM
• The IP address of the previous Slave CM is taken over by the new Slave CM
[Diagram: with a connection via switch/hub, CM#0 is Master (172.168.30.10) and CM#1 is Slave (172.168.30.20); after a LAN link-down on CM#0, CM#1 becomes Master and takes over 172.168.30.10. With a direct connection to the FST port, the Master address 172.168.30.10 likewise follows the Master role.]
Host Affinity
Two different mapping procedures are configurable:
LUN Mapping = LUN is mapped per ETERNUS DX CA port
• Volumes mapped directly to one or more CA ports are accessible from every host that has physical access to those CA ports
• A particular host cannot be prevented from access
HBA Mapping = LUN is mapped per HBA hardware address (WWPN / IQN / SAS PN)
• Volumes have to be grouped in the ETERNUS DX system and are mapped to LUNs
• These Volume groups are assigned via CA ports to the HBAs of one host or of different hosts
• New Volumes can be added to an existing Volume group
NOTE: A single CA port can support either LUN mapping or HBA mapping, not both.