2 © Copyright IBM Corporation 2008
IBM Systems & Technology Group
agenda
1. NAS today
2. The Scale-out File Services (SoFS) approach
3. General Parallel File System (GPFS)
4. The NAS Layer
5. Conclusion
External Disk Shipment Forecast
[Chart: shipped capacity (PB), price ($/GB) and revenue (M$) for external disk, 2000–2011, on a logarithmic scale from 0.1 to 100,000]
Source: IDC Disk Systems Forecast, May 2007
Customer Feedback on Current NAS
Customers have increasing demand for file services. The growth of so-called “unstructured data” outpaces that of databases by far.
- But current NAS solutions do not scale: file systems are limited to a few terabytes and a few million objects
- So customers have to add box after box and manage them individually
- There is no way to apply policies across these independent data islands
Parallel access to data is becoming more and more common, especially in the digital media space
We see requirements on data access rates and response times to individual files that were previously unique to HPC environments
Making file servers clustered is easy in theory but hard to build and maintain; there is too much complexity
Migration, integration or removal of storage for file services is a nightmare, so a lot of legacy systems still exist and need to be maintained
Backup windows are a big issue and get worse as the amount of data continuously increases
Integration of Information Lifecycle Management (ILM) functions into the file system becomes more important as the number of terabytes explodes
SoFS Quick Facts
SoFS combines a clustered file system with a global namespace. It’s deployed centrally, managed centrally, backed up centrally and grown centrally.
SoFS scales horizontally, not vertically. When you need more capacity, you just add more disks. When you need more performance, you add nodes and/or disks. There are almost no architectural limits for future growth.
NAS boxes grow vertically. You can add controllers and storage up to the maximum configuration of the box. Beyond that you have to buy a new one.
SoFS provides integrated Information Lifecycle Management
- Different tiers of storage, e.g. FC, SATA and tape
- Policy-driven placement and migration of files over their entire lifetime
SoFS offers synchronous block-level replication for data and metadata
SoFS offers multi-site configurations via the cross-cluster mount functionality
SoFS allows building highly available systems, including disaster-resistant systems across multiple sites
➔ SoFS solves the root cause of the lack of scalability and performance in today’s NAS systems instead of patching around it, as most other scalable NAS or global namespace solutions do
SOFS overview: A ‘Scale Out NAS’ design, which provides:
‘Commodity’ hardware-based design
- The product is comprised of x86 servers and inexpensive disks. It also supports numerous storage devices via SVC (disk) and TSM (disk, tape, optical).
- Scaling is done with an army of ants, not with a single big elephant
Based on GPFS (General Parallel File System)
- A proven platform over 10 years old, with numerous customer installs of multi-thousand-node clusters and multi-PB file systems. Designed for extreme performance, scalability and availability – features distributed metadata, wide striping of data, byte-range locking, parallel I/O, variable block size and extensive tuning parameters. Reference: www.top500.org
Integrates clustered CIFS and clustered NFS via CTDB, which provides:
- Transparent, non-disruptive failover of CIFS and NFS with no client-side changes
- CIFS performance of over 700 MB/s from one node to one client, and NFS performance of over 800 MB/s
- Integrated ACLs between UNIX, Linux and Windows; supports NFSv4 and NTFS access
- Unparalleled aggregate performance scaling, intelligent load balancing, simultaneous access to a single file by heterogeneous clients, dynamic thin provisioning
Provides scalable tiered storage (ILM) via a powerful metadata policy engine – migration without path changes
- Rule sets for initial placement and migration of individual files spanning move, copy, delete, backup, restore, replicate, snapshot, etc. An immutability flag is planned for a future release. Features no stubs on disk-to-disk migrations.
- Recent benchmarks show processing 1 billion files in less than 15 minutes
Provides storage tiering/pooling for ALL storage technologies (SSD, SAS, FC, SATA, iSCSI)
SOFS overview: A ‘Scale Out NAS’ design (cont)
Integrated backup/restore and DR capabilities, even for multi-PB systems
- Low-level interlock into TSM provides scalable incremental-forever backups and intelligent policy-based restore, allowing a multi-PB system to be back online with critical data in minutes or hours, not weeks or months
- Low-level integration into HSM provides quick disaster recovery of metadata with stub-file creation (restores a file system without any tape movement)
- Multi-site replication (synchronous as well as asynchronous) integrated in the file system layer
Full management suite included
- End-to-end monitoring of all included components (network, SAN, storage, servers, OS, NAS components)
- Tools for user-, group- and project-level quotas
- Self-service space creation utilities with pre-defined user templates
- System utilization and statistics reporting
- End-to-end performance reporting capabilities
- Command-line, scripting and graphical management capabilities
Distributed datacenter design
- Data exchange protocol for inter-site communication (to prevent throughput limitations at high latencies)
- Global namespace (integration of existing NAS devices at the namespace level) with user locality feature
- Remote caching capabilities
What is a Scale-out NAS Solution?
[Diagram: the SoFS stack – clustered NAS services (CIFS, NFS, HTTPS, FTP) on a parallel file system over shared SAN disks and storage pools – next to classic filers with DFS/AD, local NAS services (CIFS, NFS), a local file system and local disk, both behind the IP layer]
From a client perspective it is yet another filer.
In the datacenter it dramatically changes the handling of your data.
SoFS vs. NAS virtualizers
SoFS
- True clustered NAS
- All nodes serve all files
- Maximum single-file performance is equal to the aggregated performance of the cluster
- No disk, backup, management, etc. islands
- No bottlenecks on single directory branches
Virtualizer (VFM, StorageX, ONTAP GX)
- No clustering, just some pretty paint on an island topology
- Each individual file is pinned to a single NAS filer
- Maximum single-file performance is equal to the performance of the hosting filer
- Possible bottlenecks on individual directory branches
- Islands regarding disks, backup, etc.
The SoFS Story: Performance & Scalability
[Diagram: SoFS – parallel data flow, all nodes serve all files; classic filers with DFS/AD – each file is pinned to a single filer]
Does Performance matter?
729 MB/s from a single Windows client to a single SoFS node!
The SoFS Story: Utilization & ILM
Classic filers:
- Capacity managed individually
- Average utilization usually <50%
- All files online, but 80% haven’t been accessed during the last 6 months
SoFS:
- Capacity managed centrally
- Average utilization >80%
- Policy-driven file movement between tiers; the most inactive files can be migrated to tape
With SoFS you just buy the capacity you really need.
The SoFS Story: Integrate new Storage
Replace a storage system in 3 steps without any service interruption:
1. Attach the new storage system
2. Tell SoFS to redistribute data, excluding the decommissioned system
3. Remove the old storage system
SoFS makes adding and replacing storage one of the easiest things in the world.
No more planned downtimes and weekend shifts to install storage.
The SoFS Story: Administration
“I loved my first filer. It was so easy to administrate. When we installed the 20th I started loathing them.”
Classic filers:
- Each filer has to be administered individually
- Operating costs grow linearly with the number of filers
SoFS:
- The SoFS cluster is managed centrally
- Thanks to ILM you manage storage tiers instead of individual disks
- Thanks to easy growth and data migration you do not have to plan changes weeks in advance
SoFS makes operation easy.
The SoFS Story: Backup
TSM is tightly integrated into SoFS
The scan for changes in the file system is not done by the TSM backup client, but by the ILM engine of the file system
We scan >15 million files per minute
This means the backup window is essentially reduced to the time needed to copy the changes onto tape
With LAN-free backup this can be done at the bandwidth of your SAN environment, taking load off your TSM server
[Diagram: the SoFS cluster sends metadata to the TSM server over the LAN, while the data moves LAN-free over the SAN]
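The split of work described above (the file system's ILM engine finds the changed files; the backup client only copies their data) can be sketched roughly like this. It is an illustrative model only: `FileMeta` and `changed_since` are invented names for this sketch, not actual GPFS or TSM interfaces.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    path: str
    mtime: float  # last modification time, seconds since the epoch

def changed_since(files, last_backup_time):
    """Metadata-only scan: select the files modified after the last backup.

    In SoFS this selection is done by the file system's policy engine
    (millions of inodes per minute), not by the backup client walking the
    directory tree, so the backup window shrinks to the time needed to
    copy the selected changes to tape.
    """
    return [f.path for f in files if f.mtime > last_backup_time]

files = [
    FileMeta("/gpfs/projects/a.dat", mtime=100.0),
    FileMeta("/gpfs/projects/b.dat", mtime=250.0),
    FileMeta("/gpfs/home/c.txt", mtime=300.0),
]
# With the last backup at t=200, only two files enter the backup window.
print(changed_since(files, last_backup_time=200.0))
```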
Overview
IBM’s General Parallel File System (GPFS), available since 1996
Used in other IBM products (BIA for SAP, VTS)
Used on many of the largest supercomputers in the world
- Cluster: 1000+ nodes, fast reliable communication, common admin domain
- Shared disk: all data and metadata on disk accessible from any node through the disk I/O interface
- Parallel: data and metadata flow from all of the nodes to all of the disks in parallel
High performance
- Multi-terabyte files, multi-petabyte file systems
- Wide striping, large blocks, many GB/s to a single file
Highly reliable
- Can survive disk and node failures
- Allows split-site configurations
GPFS: ASC Purple/C Supercomputer
1536-node, 100 Teraflop IBM System P cluster at Lawrence Livermore National Laboratory
2 PB GPFS file system (one mount point)
500 RAID controller pairs, 11000 disk drives
126 GB/s parallel I/O measured to a single file (134 GB/s to multiple files)
GPFS: Information Lifecycle Management
Basic ILM technologies
- Storage pool: a group of LUNs
- Fileset: defines subtrees of a file system
- Policies: rule-based management of files inside the storage pools
What it offers
- One global file system namespace across a pool of independent storage
- Files in the same directory can be in different pools
- Files are placed in storage pools at create time using policies
- Files can be moved between pools for policy reasons
- Can be used for hierarchically arranged storage based on files
- Allows classification of data according to SLAs
[Diagram: gold, silver and bronze storage pools plus a tape/system pool, connected via a storage area network]
Storage Policies
Rules to control the placement, migration, and retention of files
Declarative SQL-like language
Rule types:
- Placement policies, evaluated at file creation. Examples:
  rule 'home' set pool 'gold' for fileset 'home_sven'
  rule 'otherfiles' set pool 'silver'
- Migration policies, evaluated periodically:
  rule 'cleangold' migrate from pool 'gold' threshold(90,70) to pool 'silver'
  rule 'hsm' migrate from pool 'sata' threshold(90,85) weight(current_timestamp – access_time) to pool 'hsm' where file_size > 1024kb
  rule 'cleansilver' when day_of_week()=Monday migrate from pool 'silver' to pool 'bronze' where access_age > 30 days
- Deletion policies, evaluated periodically:
  rule 'purgebronze' when day_of_month()=1 delete from pool 'bronze' where access_age > 365 days
- There are policies for backup/archive and restore/retrieve as well
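To make the evaluation model concrete, here is a toy policy engine in Python. It is a simplified illustration, not GPFS itself: placement rules are tried in order at create time (first match wins), and a migration rule with threshold(high,low) drains the least recently accessed files first, mimicking the weight(current_timestamp – access_time) clause above.

```python
def place(file_attrs, rules):
    """Placement: evaluated once at file creation; the first matching rule wins."""
    for _name, predicate, pool in rules:
        if predicate(file_attrs):
            return pool
    raise ValueError("no placement rule matched")

# Toy equivalents of:
#   rule 'home'       set pool 'gold' for fileset 'home_sven'
#   rule 'otherfiles' set pool 'silver'
PLACEMENT_RULES = [
    ("home",       lambda f: f["fileset"] == "home_sven", "gold"),
    ("otherfiles", lambda f: True,                        "silver"),
]

def migrate(files, src, dst, capacity, high, low):
    """Migration: when pool `src` exceeds `high`% of `capacity`, move the
    least recently accessed files to `dst` until usage drops to `low`%
    (mirrors threshold(high,low) with weight(now - access_time))."""
    used = sum(f["size"] for f in files if f["pool"] == src)
    moved = []
    if used <= capacity * high / 100:
        return moved
    for f in sorted([f for f in files if f["pool"] == src],
                    key=lambda f: f["atime"]):  # oldest access first
        if used <= capacity * low / 100:
            break
        f["pool"] = dst
        used -= f["size"]
        moved.append(f["name"])
    return moved

print(place({"fileset": "home_sven"}, PLACEMENT_RULES))  # gold
```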
File Placement is also useful for Databases
Databases by default place index and data table spaces in the same directory
File placement policies can direct the data to a pool with appropriate performance characteristics using the file extension:
- Data SMS table spaces have a .DAT extension
- Index SMS table spaces have an .INX extension
Example directory /DATA/MYDB/: SQL00002.DAT, SQL00002.INX, SQL00003.DAT, SQL00003.INX, SQL00004.DAT, SQL00004.INX
Rule ‘Data rule’ set pool ‘index_pool’ where UPPER(NAME) like ‘%.INX’
[Diagram: .INX files are placed in a RAID 1 index_pool, .DAT files in a RAID 5 data_pool]
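The idea behind that rule (a pattern match on the file name picks the pool) can be mimicked in a few lines of Python. This sketch is illustrative only; it is not how GPFS evaluates policies, and the pool names are taken from the example above.

```python
from fnmatch import fnmatch

# Route DB2 SMS table-space files to pools by extension, as the
# UPPER(NAME) LIKE '%.INX' rule does. First matching pattern wins.
EXTENSION_RULES = [
    ("index rule", "*.INX", "index_pool"),  # index table spaces -> RAID 1 pool
    ("data rule",  "*.DAT", "data_pool"),   # data table spaces  -> RAID 5 pool
]

def pool_for(filename, rules=EXTENSION_RULES, default="system"):
    """Return the storage pool a new file would be placed in."""
    for _name, pattern, pool in rules:
        if fnmatch(filename.upper(), pattern):
            return pool
    return default

print(pool_for("SQL00002.INX"))  # index_pool
```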
Scan 1 billion files in 95 minutes on 1 node! Less than 30 minutes on 4 nodes, and less than 15 on 8!
Using 8x IBM eServer xSeries 336 (3.2 GHz Intel Xeon) attached at 4 Gb/s to IBM DS-4800 (2.1 TB RAID 1) & IBM DS-4100 (10.2 TB RAID 5)
[Chart: “Scalable ILM Policy Scan Time” – time in minutes (0–100) vs. number of nodes (1–8) for directory walk, inode scan and total scan time]
GPFS: Replication & Cross-cluster Mounts
Replication
- Synchronous intra-cluster replication at the file level, handled by GPFS nodes
- Replication can be set up for subsets of the file system
Cross-cluster mounts
- In a setup with two distinct GPFS clusters, one can mount the file system owned by the other cluster
- All locking and metadata operations are routed through the owning cluster; block I/O can be done directly by the cross-mounting cluster
- The cross-mounting cluster can mount the file system into one of its own file systems (single mount point)
Snapshot Support
Read-only copy of the directory structure and files
Only changes to the original files consume disk space
Creating a snapshot:
mmcrsnapshot fs1 snap1
Writing dirty data to disk. Quiescing all file system operations. Writing dirty data to disk again. Creating snapshot. Resuming operations.
Before: /fs1/file1, /fs1/file2, /fs1/subdir1/file3, /fs1/subdir1/file4, /fs1/subdir2/file5
After: the same files, plus read-only copies under /fs1/.snapshots/snap1/ (e.g. /fs1/.snapshots/snap1/file1 through /fs1/.snapshots/snap1/subdir2/file5)
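The space behaviour described above (a snapshot is free until the live files change) can be modelled with a small copy-on-write sketch. This is a conceptual illustration of the semantics only, not GPFS's actual on-disk mechanism.

```python
class SnapFS:
    """Toy copy-on-write file system: snapshots share storage with the
    live files and only record the old version of files that change."""

    def __init__(self):
        self.live = {}       # path -> data
        self.snapshots = {}  # snapshot name -> {path -> preserved data}

    def create_snapshot(self, name):
        # Like `mmcrsnapshot fs1 snap1`: an empty map, so at creation time
        # the snapshot consumes no space and shares everything with live.
        self.snapshots[name] = {}

    def write(self, path, data):
        # Copy-on-write: before overwriting, preserve the old version in
        # every snapshot that has not yet recorded its own copy.
        for snap in self.snapshots.values():
            if path not in snap and path in self.live:
                snap[path] = self.live[path]
        self.live[path] = data

    def read(self, path, snapshot=None):
        if snapshot is None:
            return self.live[path]
        snap = self.snapshots[snapshot]
        # Fall through to the live copy if the file is unchanged.
        return snap[path] if path in snap else self.live[path]

fs = SnapFS()
fs.write("/fs1/file1", "v1")
fs.create_snapshot("snap1")
fs.write("/fs1/file1", "v2")
print(fs.read("/fs1/file1"))                    # the live file: v2
print(fs.read("/fs1/file1", snapshot="snap1"))  # the snapshot still sees v1
```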
Windows VSS Snapshots Integration
- Integrated into Windows Explorer using the Volume Shadow Copy Service (VSS)
What is inside SoFS?
[Stack diagram: IBM Storage, IBM BladeCenter, Enterprise Linux, IBM GPFS, CTDB & NAS Exporter, SoFS Mgt Extensions and Tivoli Storage Manager, all wrapped in IBM Service]
Another View of the SoFS Components
[Diagram: three nodes, each running CIFS, NFS, vsftpd, Apache and sshd on a Linux kernel with ctdb and GPFS underneath; the nodes share SAN disks organized into storage pools and serve a pool of public IPs controlled by ctdb]
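The public-IP pool in the diagram above is what makes node failures invisible to clients: every address is always hosted by some healthy node. A much simplified round-robin model (not CTDB's actual allocation algorithm) shows the idea:

```python
def assign_ips(public_ips, healthy_nodes):
    """Spread every public IP over the currently healthy nodes (round robin).

    When a node fails, calling this again with the reduced node list moves
    its addresses to the survivors, so clients keep using the same IPs.
    """
    if not healthy_nodes:
        raise RuntimeError("no healthy node left to host the public IPs")
    assignment = {node: [] for node in healthy_nodes}
    for i, ip in enumerate(public_ips):
        assignment[healthy_nodes[i % len(healthy_nodes)]].append(ip)
    return assignment

ips = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]
print(assign_ips(ips, ["node1", "node2"]))
# If node2 fails, node1 takes over all four addresses:
print(assign_ips(ips, ["node1"]))
```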
SoFS CIFS Enhancements
Clustering
- Multiple exports of the same file system over multiple nodes, including distributed lock, share and lease support
- Failover capabilities on the server
- Integration with NFS, FTP and HTTP daemons with regard to locking, failover and authorization
Performance optimization with GPFS
NTFS ACL support in Samba using the native GPFS NFSv4 ACL support
HSM support within Samba to allow destaging of files to tape and user-transparent recall
VSS integration of GPFS snapshots
Registry-based Samba configuration
Alternate Data Streams
Simple install and configuration tools
Availability and Scaling Architecture
SoFS uses an active/active architecture
- All nodes are always up, running and serving files
- In case of a node failure, the healthy nodes take over the load of the failed node
- Spare nodes improve overall performance during normal operation
Most NAS solutions use an active/passive architecture
- A spare node sleeps until its partner fails. At that moment it starts up and takes over the load of the failed partner.
- Failures, misconfiguration, broken cables, etc. of the spare node might remain undiscovered until the partner node fails.
[Diagram: active/passive – clients of Node 1 fail over to a dedicated standby Node 2; active/active – all of Node 1 … Node n serve clients, and on a failure the survivors take over the failed node's clients]
Availability vs. Disaster Recovery
Availability features in SoFS
- Clustering: protects against individual node failures
- RAID 0, 1, 5 & 6: protects against individual disk failures
- Symmetric replication: protects against storage subsystem failures
Disaster recovery features in SoFS
- Snapshots: protect against accidental deletion or modification of files
- TSM backup: protects against complete loss of the file system or loss of individual files
- Asynchronous replication: protects against complete loss of the file system or individual files
SoFS Management GUI
Installation wizards
- GPFS
- CIFS & CTDB
Cluster monitoring
- Node availability
- System utilization
- File system utilization
Self-service
- Create and manage project spaces
GPFS management
- Cluster configuration
- File systems
- Disks & NSDs
- Nodes
- Quotas
- Snapshots
- ILM: Storage Pools, Policies, Filesets
Availability Management
Central Event Monitoring System
Central Event Notification System
Node Management
Filesystem Management
Usage/Quota Management
System Utilization Reports
Entry SoFS configuration
3 I/O Nodes, 1 Management Node
up to 48 TB usable capacity per DS3400
Streaming performance with large blocks up to 700 MB/s
Up to 7500 active concurrent users
[Diagram: BladeCenter H chassis with three SoFS blades, a management blade and further HS21 blades on redundant midplanes, Ethernet switch modules to the client network, and FC switch modules to a DS3400 controller with three EXP3000 expansion units]
Medium SoFS configuration
6 I/O Nodes, 1 Management Node
2*DS4700 with up to 132 TB usable capacity
Streaming performance with large blocks up to 1500 MB/s
Up to 15000 active concurrent users
[Diagram: BladeCenter H chassis with six SoFS blades, a management blade and further HS21 blades, redundant Ethernet switch modules to the client network, and FC switch modules to two DS4700 controllers, each with six EXP810 expansion units]
Large SoFS configuration
13 I/O Nodes, 1 Management Node
4*DS4800 with up to 528 TB usable capacity
Streaming performance with large blocks up to 4000 MB/s
Up to 30000 active concurrent users
[Diagram: BladeCenter H chassis with 13 SoFS blades and a management blade, redundant Ethernet switch modules to the client network, and FC switch modules connected via external FC switches to four DS4800 controllers (two shown) with EXP810 expansion units]
Minimalistic SoFS Configuration
3 Nodes
DS3200 direct SAS-attached
Up to 48 TB usable capacity per DS3200
Streaming performance with large blocks up to 700 MB/s
Up to 7500 active concurrent users
[Diagram: three x3650 servers behind redundant Ethernet switches on the client network, SAS-attached to a DS3200 with three EXP3000 expansion units]
SoFS BladeCenter Hardware
BladeCenter H chassis
- Up to 6 Ethernet switch modules
  • Either GbE or 10 GbE uplinks
- Two 4 Gb Fibre Channel switch modules
- Two management modules
- Up to 14 HS21 blades (one dedicated to management)
  • Intel Xeon quad-core, 8 GB RAM, 4x 1 Gb I/O + 1 Gb GPFS + 1 Gb CTDB/management
[Diagram: the chassis midplanes connect the HS21 blades (one dedicated to management) to the Ethernet switch modules with 10 GbE uplinks (4x2x10 Gb = 80 Gb to the client network) and to the FC switch modules (2x4 Gb = 8 Gb per blade, 2x6x4 Gb = 48 Gb to storage); dedicated networks for GPFS and CTDB/management]
SoFS Rack-mounted Hardware
x3650
- 2x Intel Xeon quad-core
- 2 GigE adapters on board
- Up to 6 optional Ethernet or FC adapters
- 4 to 48 GB RAM
Why BladeCenter?
Management efficiency:
- All network interconnect (Ethernet & Fibre Channel) is hard-wired on the midplanes. No chance of wrong or missing cabling. (In a SoFS cluster with 14 blades the midplanes replace 112 network cables.)
- Blades can be discovered, monitored, started, stopped, etc. through the BC management modules.
- Switches are an integrated part of the cluster and can be monitored through the BC management modules.
Space efficiency: fits 14 servers into a 9U package
Power efficiency: the BC power modules are up to 50% more efficient than the smaller power supplies found in rack-mounted servers
Scalability: add up to 14 blades; the infrastructure (power, cooling, network, management) is already there
Price: the initial investment looks expensive, but equipped with approximately 5 blades or more, the TCO is lower than that of rack-mounted servers
Backup, offline files & Tape
Full support of GPFS snapshots. Users can see existing snapshots in the “Previous Versions” dialog of Windows Explorer and restore files autonomously
SoFS supports LAN-free backup using Tivoli Storage Manager 5.5
SoFS supports destaging of files to tape using Tivoli Hierarchical Storage Manager 5.5
Customers have the choice of IBM’s premium LTO3 tape libraries
Full support of IBM’s tape encryption technology
(Pictured: TS3500 Tape Library)
SoFS vs. Classic NAS
SoFS combines a clustered file system with a global namespace. It’s deployed centrally, managed centrally, backed up centrally and grown centrally.
SoFS scales horizontally, not vertically. When you need more capacity, you just add more disks. When you need more performance, you add nodes and/or disks. There are almost no architectural limits for future growth.
NAS boxes grow vertically. You can add controllers and storage up to the maximum configuration of the box. Beyond that you have to buy a new one.
SoFS provides integrated Information Lifecycle Management
- Different tiers of storage, e.g. FC, SATA and tape
- Policy-driven placement and migration of files over their entire lifetime
SoFS offers synchronous block-level replication for data and metadata
SoFS offers multi-site configurations via the cross-cluster mount functionality
SoFS allows building highly available systems, including disaster-resistant systems across multiple sites
Questions?
Special Notices
Copyright IBM Corporation, 2006
– This presentation was produced in the United States. IBM may not offer the products, programs, services or features discussed herein in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the products, programs, services, and features available in your area. Any reference to an IBM product, program, service or feature is not intended to state or imply that only IBM's product, program, service or feature may be used. Any functionally equivalent product, program, service or feature that does not infringe on any of IBM's intellectual property rights may be used instead of the IBM product, program, service or feature.
– The [e(logo)server] brand consists of the established IBM e-business logo followed by the descriptive term "server".
– Information in this presentation concerning non-IBM products was obtained from the suppliers of these products, published announcement material or other publicly available sources. Sources for non-IBM list prices and performance numbers are taken from publicly available information including D.H. Brown, vendor announcements, vendor WWW Home Pages, SPEC Home Page, GPC (Graphics Processing Council) Home Page and TPC (Transaction Processing Performance Council) Home Page. IBM has not tested these products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
– IBM may have patents or pending patent applications covering subject matter in this presentation. The furnishing of this presentation does not give you any license to these patents. Send license inquires, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.
– All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of a specific Statement of General Direction.
– The information contained in this presentation has not been submitted to any formal IBM test and is distributed "AS IS". While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. The use of this information or the implementation of any techniques described herein is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. Customers attempting to adapt these techniques to their own environments do so at their own risk.
– IBM is not responsible for printing errors in this presentation that result in pricing or information inaccuracies.
– The information contained in this presentation represents the current views of IBM on the issues discussed as of the date of publication. IBM cannot guarantee the accuracy of any information presented after the date of publication.
– All prices shown are IBM's suggested list prices; dealer prices may vary.
– IBM products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
– Information about non-IBM products was obtained from suppliers of those products. IBM makes no representations or warranties regarding these products. Non-IBM products are offered and warranted by third-parties, not IBM.
Special Notices (contd.)
– Information provided in this presentation and information contained on IBM's past and present Year 2000 Internet Web site pages regarding products and services offered by IBM and its subsidiaries are "Year 2000 Readiness Disclosures" under the Year 2000 Information and Readiness Disclosure Act of 1998, a U.S statute enacted on October 19, 1998. IBM's Year 2000 Internet Web site pages have been and will continue to be our primary mechanism for communicating year 2000 information. Please see the "legal" icon on IBM's Year 2000 Web site (www.ibm.com/year2000) for further information regarding this statute and its applicability to IBM.
– Any performance data contained in this presentation was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements quoted in this presentation may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this presentation may have been estimated through extrapolation. Actual results may vary. Users of this presentation should verify the applicable data for their specific environment.
– The following terms are registered trademarks of International Business Machines Corporation in the United States and/or other countries: AIX, AIXwindows, AS/400, C Set++, CICS, CICS/6000, DataHub, DataJoiner, DB2, DEEP BLUE, DYNIX, DYNIX/ptx, e(logo), ESCON, IBM, IBM(logo), Information Warehouse, Intellistation, IQ-Link, LANStreamer, LoadLeveler, Magstar, MediaStreamer, Micro Channel, MQSeries, Net.Data, Netfinity, NUMA-Q, OS/2, OS/390, OS/400, Parallel Sysplex, PartnerLink, PartnerWorld, POWERparallel, PowerPC, PowerPC(logo), ptx/ADMIN, RISC System/6000, RS/6000, S/390, Scalable POWERparallel Systems, SecureWay, Sequent, SP2, System/390, The Engines of e-business, ThinkPad, Tivoli(logo), TURBOWAYS, VisualAge, WebSphere. The following terms are trademarks of International Business Machines Corporation in the United States and/or other countries: AIX/L, AIX/L(logo), AIX 5L, AIX PVMe, Application Region Manager, AS/400e, Blue Gene, Chipkill, ClusterProven, DB2 OLAP Server, DB2 Universal Database, e(logo)business, GigaProcessor, HACMP/6000, Intelligent Miner, iSeries, Network Station, NUMACenter, PowerPC Architecture, PowerPC 604, POWER2 Architecture, pSeries, Sequent (logo), SequentLINK, Service Director, Shark, SmoothStart, SP, Tivoli Enterprise, TME 10, Videocharger, Visualization Data Explorer, xSeries, zSeries. A full list of U.S. trademarks owned by IBM may be found at http://iplswww.nas.ibm.com/wpts/trademarks/trademar.htm.
– Lotus and Lotus Notes are registered trademarks and Domino and Notes are trademarks of Lotus Development Corporation in the United States and/or other countries.
– NetView, Tivoli and TME are registered trademarks and TME Enterprise is a trademark of Tivoli Systems, Inc. in the United States and/or other countries.
– Microsoft, Windows, Windows NT and the Windows logo are registered trademarks of Microsoft Corporation in the United States and/or other countries.
– UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group.
– LINUX is a registered trademark of Linus Torvalds.
– Intel and Pentium are registered trademarks and MMX, Itanium, Pentium II Xeon and Pentium III Xeon are trademarks of Intel Corporation in the United States and/or other countries.
– Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States and/or other countries.
– Other company, product and service names may be trademarks or service marks of others.