V6.4.0 Technical Update: SAN Volume Controller and Storwize V7000
Bill Wiegand - ATS, Consulting I/T Specialist, Storage Virtualization
Agenda

FCoE Support
Non-Disruptive Volume Move
Compression Overview
Storwize V7000 Clustered System
– Unified Update
Miscellaneous
FCoE – Basics

Part of the T11 Technical Committee Fibre Channel BB-5 project
Not intended to displace or replace Fibre Channel, and is not iSCSI
Designed to enable convergence between Ethernet and Fibre Channel networks in the data center
– Simplifies networking and reduces costs
Technically speaking, the FC0 and FC1 layers of Fibre Channel are replaced by a new, "beefed-up" or "lossless" Ethernet
– Full duplex 802.3 Ethernet required
FCoE – Basics

At a very high level, FCoE takes a normal FC frame and packages it within an Ethernet packet
Additional services (e.g. name server) are also provided by a Fibre Channel Forwarder (FCF), allowing interoperation with today's Fibre Channel networks
Requires
– Ethernet Jumbo Frames
– 10Gb/s Ethernet only
FCoE – Topologies

FCoE frames can be forwarded by Ethernet switches that support the protocol on the same subnet
Fibre Channel Forwarders (FCFs) perform switching onto Fibre Channel fabrics
FCoE – SVC/Storwize V7000 Support

With V6.4 the following hardware will support FCoE:
– SVC model 2145-CG8 nodes, if the optional 10Gb/s Ethernet/CNA adapter is installed
– Storwize V7000 2076-3xx model control enclosures
Both SVC and Storwize V7000 systems can be non-disruptively upgraded to support FCoE
There are two FCoE ports per node or node canister
FCoE – Interoperability
The SVC & Storwize V7000 will support attaching to all existing Fibre Channel hosts, storage and each other via the FCoE ports on the nodes
Additional support for native FCoE hosts and controllers will be added over time
SVC Stretch cluster supports use of the FCoE ports
iSCSI and FCoE can be used on the same 10Gb/s ports at the same time if required
FCoE – DCBx Configuration
The SVC and Storwize V7000 10Gb/s Ethernet ports will use the following classes of service:
– NIC Class will carry iSCSI traffic
– FCoE Class will carry FCoE traffic
– The iSCSI Class is not currently being used but may be used at some point in the future
FCoE – Configuration Rules
VLAN Tagging is not supported
The FCF and the 10 Gb/s ports MUST be on the same VLAN for it to be a supported configuration
A single FCoE port is not able to discover multiple FCFs
– If multiple FCFs are discovered then the system will use the first one in the list
• Which may not be the one that the customer wants to use
FCoE – WWPN Changes

Each hardware platform has a range of WWPNs associated with it:
– SVC: 5005076801xxxxxx
– Storwize V7000: 5005076802xxxxxx
When a customer accepts a new hardware configuration using the "variable hardware" technology, all WWPNs will be re-allocated
– In most cases this won't happen, but in future configurations it will become more likely
WWPNs are assigned in the following order from within the assigned range:
1. Fibre Channel
2. FCoE
3. SAS
4. Other internal WWPNs
FCoE – Interface Changes

SVC shares all of the FC WWPNs between the FC and FCoE physical ports
– 1 WWPN per 10GbE port
– Maximum 6 WWPNs per node (4 FC and 2 FCoE)
– 2x10Gb != 4x8Gb
• Only a full migration from FC to FCoE needs to take this bandwidth difference into account
The new "lsportfc" command will provide details of the WWPNs in the system
FCoE – Interface Changes

View: lsportfc
– Captures current port status for FCoE/FC ports on the system
– Similar to lsportip for iSCSI

IBM_2076:cluster:superuser>lsportfc
id fc_io_port_id port_id type     port_speed node_id node_name WWPN             nportid status
0  1             1       fc       4Gb        23      tb28-0-1  500507680110497E 02E100  active
1  2             2       fc       4Gb        23      tb28-0-1  500507680120497E 02E000  active
2  3             3       fc       4Gb        23      tb28-0-1  500507680130497E 043E00  active
3  4             4       fc       4Gb        23      tb28-0-1  500507680140497E 04BE00  active
4  5             3       ethernet 10Gb       23      tb28-0-1  500507680150497E 040C0F  active
5  6             4       ethernet 10Gb       23      tb28-0-1  500507680160497E 021003  active

IBM_2076:tbcluster-28:superuser>lsportfc 4
id 4
fc_io_port_id 5
port_id 3
type ethernet
port_speed 10Gb
node_id 23
node_name tb28-0-1
WWPN 500507680150497E
nportid 040C0F
status active
switch_WWPN 100000051E07F464
fpma 0E:FC:00:04:0C:0F
vlanid 100
fcf_MAC 00:05:73:C2:CA:F0
FCoE – Support

V6.4 provides both FCoE Target and Initiator functions
The SVC/Storwize V7000 FCoE interface can be used for the following functionality:
– FC Host access to a Volume (via either FC or FCoE ports)
– FCoE Host access to a Volume (via either FC or FCoE ports)
– SVC/Storwize V7000 access (via either FC or FCoE ports) to an FC-accessed LUN on an external storage system
– SVC/Storwize V7000 access (via either FC or FCoE ports) to an FCoE-accessed LUN on an external storage system
– SVC/Storwize V7000 to another SVC/Storwize V7000 via any combination of FC and FCoE
• Can dedicate FCoE ports for replication, or use them for host/storage access to allow dedicating two FC ports for replication or direct connection of server HBA ports
FCoE – Support

[Diagram: replication topology – two Storwize V7000 systems, each attached via 10Gb/s ports to an IBM Converged Switch B32 (FCF), with the sites linked through SAN24B-4 (2498-B24) Fibre Channel fabric switches]
V6.4 supports remote copy/replication via FCoE
– Requires use of an FCF and a full FC ISL
• Could use FCIP and routers as we do today
– DWDM-based links are supported as well
– FCoE is not iSCSI; there is currently no native IP replication capability
All current bandwidth sizing and SVC/Storwize V7000 system sizing and planning for replication applies
FCoE – Resources

FCIA Guide:
– http://www.fibrechannel.org/documents/doc_download/1-fcia-solution-guide
IBM Red Paper:
– http://www.redbooks.ibm.com/redpapers/pdfs/redp4493.pdf
Non-Disruptive Volume Move Across I/O Groups

What this is:
Allows SVC customers to move a Volume assigned to one I/O group over to another I/O group without disruption to the I/O between the Volume and the host
Non-disruptive movement of the Volume requires interaction with the host and its multi-pathing software to ensure paths are active and available during the move
Why it matters:
Growing virtualization environments or performance considerations require movement of Volumes to other I/O groups better equipped to meet the customer's requirements
[Diagram: host with multi-path I/O to Volumes in I/O group 0 (1), Volume moved to I/O group 1 (2), active paths established to the relocated Volumes (3)]
Non-Disruptive Volume Move Across I/O Groups
How it works:
The Volume belongs to a single I/O group, referred to as the "caching I/O group", and all I/O is sent to the nodes in that I/O group
The Volume is made accessible through one or more additional I/O groups, referred to as "access I/O groups"
– Any host I/O which is sent to an access I/O group will be forwarded to the caching I/O group
The "caching I/O group" is switched to the desired I/O group
– Using a new command called "movevdisk"
– Host I/O to the original I/O group is now forwarded to the new I/O group
The host multi-pathing drivers are then reconfigured to discover the additional paths to the Volume on the new I/O group
– Some zoning changes may also be required
Once the multi-pathing drivers have discovered the new paths, access to the Volume through the original I/O group can be unconfigured
– The multi-pathing drivers can now be reconfigured a second time to remove the now-dead paths
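A minimal CLI sketch of this sequence (the Volume and I/O group names are illustrative, not from this deck):

svctask addvdiskaccess -iogrp io_grp1 vol0    (make the Volume accessible through the new I/O group)
svctask movevdisk -iogrp io_grp1 vol0         (switch the caching I/O group)
(rescan host multi-pathing and adjust zoning so the new paths are discovered)
svctask rmvdiskaccess -iogrp io_grp0 vol0     (unconfigure access through the original I/O group)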
NDVM – Details

A Volume which is in a Metro or Global Mirror relationship currently cannot change its "caching I/O group"
If a Volume in a FlashCopy mapping is moved, the "bitmaps" are left in the original I/O group
– This will cause additional inter-node messaging to allow FlashCopy to operate
The SCSI ID of the Volume-to-host mapping will usually change during this procedure, and it is not currently possible to select the new SCSI ID
– If the Volume is mapped to multiple servers, the LUN may use different SCSI IDs for each host
The maximum number of paths per Volume (8) has not changed
– Customers who are already using 8 paths per Volume will not be able to use NDVM because NDVM requires adding paths
If the caching I/O group fails for any reason, the Volume will go offline
– Even if access I/O groups are configured
This function can be used to change the preferred node for a Volume to a different node in the same I/O group by first moving the Volume to a second I/O group and then moving it back again
– The system allows selection of which node to use as the preferred node when moving the Volume back to the original I/O group
– For Storwize V7000 this would require a clustered system configuration
– Note: The multi-pathing driver may not detect the change without a reboot
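A sketch of that preferred-node round trip, assuming movevdisk accepts a preferred-node option as described above (all names illustrative):

svctask movevdisk -iogrp io_grp1 vol0               (move to a second I/O group)
svctask movevdisk -iogrp io_grp0 -node node2 vol0   (move back, selecting the new preferred node)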
NDVM – Host Support/Restrictions

At initial launch the following restrictions are likely to be in force:
– No iSCSI host support
– No support for host-based clustering
• MSCS, VMware Cluster, HACMP, etc.
There will be restrictions on what operating systems are supported at GA
– Currently supported:
• SLES 11
• RHEL 6.1 (probably 6.2 and 6.3 as well)
– Should be supported at or shortly after GA:
• AIX (SDD fix required; round-robins I/Os unless rebooted)
• VMware without VAAI support (awaiting test)
• VMware with VAAI (needs SVC code changes)
• W2K8 (SDD fix required; can't delete old paths)
Review the support matrix at GA on June 15th for official support status
Real-time Compression – Basics

Compression is an alternative to Thin Provisioning
– They both allow you to use less physical space on disk than is presented to the host
A Compressed Volume is "a kind of" Thin Provisioned Volume
– Only uses physical storage to store compressed data
– The Volume can be built from a pool using internal or external MDisks
Compression requires the I/O group hardware to be one of the following platforms:
– SVC model 2145-CF8/CG8 nodes
– Storwize V7000 model 2076-1xx/3xx control enclosures
Volume mirroring can be used to convert an existing Volume to a Compressed Volume, as sketched below
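A minimal sketch of that conversion (pool and Volume names illustrative): add a compressed copy, wait for synchronization, then remove the uncompressed copy.

svctask addvdiskcopy -mdiskgrp pool0 -rsize 2% -autoexpand -compressed vol0   (adds compressed copy 1)
svcinfo lsvdisksyncprogress vol0                                              (wait until synchronization reaches 100%)
svctask rmvdiskcopy -copy 0 vol0                                              (remove the original uncompressed copy 0)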
Real-time Compression – Basics

A maximum of 200 Compressed Volumes per I/O group will initially be supported
Licensing is as follows:
– For SVC it is per TB of Volume capacity as seen by a host
• Need fifty 100GB Compressed Volumes, so need a 5TB license
– For Storwize V7000 it is per enclosure
• E.g. a customer with a 4-enclosure system who is virtualizing an external disk system with 2 enclosures would require a 6-enclosure license
Note: Creating the first Compressed Volume in an I/O group will instantly dedicate CPU and memory resources from the nodes/node canisters in that I/O group to the compression engine
– So planning/sizing should be done before implementing in a production environment
More detail on this and how compression works will be provided on the June 13th call tomorrow
Real-time Compression – Basics

[Diagram: SVC software stack – Clients, Front End, Remote Copy, Cache, FlashCopy, Mirroring, Thin Provisioning, Virtualization, Back End, Storage – with the Random Access Compression Engine™ (RACE) software component alongside the SVC software component]
All copy services will interoperate with Compressed Volumes
– All copy services will be working with uncompressed data
• No real changes in sizing and planning for FlashCopy or replication
– Bandwidth sizing for replication is the same for compressed and non-compressed Volumes
– Compression engine resources allocated per I/O group need to be considered in sizing
All Thin Provisioning properties apply to Compressed Volumes
– Virtual capacity, real capacity, used capacity, etc.
A new property is introduced: uncompressed capacity
• Provides an indication of how much uncompressed data has been written to the Volume
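These values can be checked from the CLI; a sketch assuming a Compressed Volume vol0 (the exact field names in the detailed view, such as uncompressed_used_capacity, are our assumption for this release):

svcinfo lsvdiskcopy vol0   (per-copy view showing capacity and compressed status)
svcinfo lsvdisk vol0       (detailed view including real, used, and uncompressed capacity)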
Real-time Compression – GUI Support

The GUI displays compression savings on a Volume, Pool, and System basis
Real-time Compression – GUI Support
GUI Performance panel shows separate CPU utilization for Compression and System workloads
Real-time Compression – Sizing Tools

The following tools will be available to support customers deploying Compression:
– Disk Magic
• Will ask the user to provide an "effectiveness" value (similar to Easy Tier)
• Available later this year
– Capacity Magic
• Will ask the user to provide a compression ratio to complete the sizing
– Comprestimator
• A tool to estimate the compression ratio which is achievable for a given set of data
• Loaded on the customer's hosts
Real-time Compression – 45 Day Trial License

45-day free trial license of the Compression function
– Included in the software, so simply activate it using the GUI by setting the license to something other than zero to avoid errors in the event log
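From the CLI this would be a one-line license change; a sketch assuming chlicense takes a compression capacity option in this release (the option name and value are an assumption):

svctask chlicense -compression 5   (SVC example: license 5TB of Compressed Volume capacity)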
Scale the Storwize V7000 Multiple Ways

An I/O Group is a control enclosure and its associated SAS-attached expansion enclosures
A clustered system can consist of 2-4 I/O Groups
Scale capacity/throughput 4x
– Up to 1.4PB raw capacity or 960 drives in two 42U racks
Non-disruptive upgrades
– From smallest to largest configurations
– Purchase hardware only when you need it
• No extra feature to order and no extra charge for a clustered system
• Configure one system using the USB stick, then add the second using the GUI (see the CLI sketch below)
Virtualize storage arrays behind the Storwize V7000 for even greater capacity and throughput
[Diagram: a Storwize V7000 one-I/O-Group system (one control enclosure plus expansion enclosures) expanding and clustering into a 2-4 I/O Group clustered system of multiple control enclosures, each with its own expansion enclosures]

No interconnection of SAS chains between control enclosures; control enclosures communicate via FC and must use all 8 FC ports on the enclosures
NOTE: No SCORE/RPQ required
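A minimal CLI sketch of adding a second control enclosure, assuming the candidate-listing and add commands behave as described here (the serial number is illustrative):

svcinfo lscontrolenclosurecandidate                      (list control enclosures that can be added)
svctask addcontrolenclosure -iogrp 1 -sernum 78G0123     (add the new control enclosure as I/O group 1)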
Storwize V7000 Unified Scaling Unchanged

Storwize V7000 Unified can scale disk capacity by adding up to nine expansion enclosures to the standard control enclosure
Virtualize external storage arrays behind Storwize V7000 Unified for even greater capacity
– CIFS is not currently supported with externally virtualized storage
Cannot horizontally scale out by adding another Storwize V7000 control enclosure and associated expansion enclosures
– Nor an additional Unified system
– If a customer has a clustered Storwize V7000 system today, they will not be able to upgrade to a Unified system when the MES is available until we support this in a future release
V6.4 won't be picked up by Unified, so Unified won't currently benefit from the new functions discussed today
[Diagram: Storwize V7000 Unified one-I/O-Group system (control enclosure plus expansion enclosures); expanding to a 2-4 I/O Group clustered Unified system is NOT SUPPORTED]
Storwize V7000 – Pre-V6.4 behavior

[Diagram (all cabling shown is logical): two I/O Groups, each a control enclosure with three expansion enclosures, connected over the SAN; Storage Pools A, B, and C built from MDisks across the I/O Groups]
– Default behavior is a storage pool per I/O Group per drive class
– Volumes are assigned to the I/O Group that owns the most MDisks in the pool
– Volumes are assigned to I/O Group 0 if the pool has an equal number of MDisks from each I/O Group
• Expansion enclosures are connected through one control enclosure and can be part of only one I/O group
• Storage pools can contain MDisks from more than one I/O group
• Inter-control enclosure communications happens over the SAN
• All MDisks are accessed via owning I/O group
• A Volume is serviced by only one I/O group
Storwize V7000 – V6.4 and later behavior

Topology and connectivity rules are unchanged from the pre-V6.4 diagram above; the difference is in Volume ownership:
– Default behavior remains a storage pool per I/O Group per drive class
– Volume ownership is balanced across node canisters in all I/O Groups when a pool contains MDisks from multiple I/O Groups
SVC to Storwize V7000 Remote Copy

When V6.3 GA'd, we provided the ability to replicate between SVC and Storwize V7000 systems
V6.3 introduced a new cluster property called "layer"
– SVC is always in "replication layer" mode
– Storwize V7000 is either in "replication layer" mode or "storage layer" mode
• Storwize V7000 is in "storage layer" mode by default
• Switch to "replication layer" using "svctask chcluster -layer replication"
– Can only be changed via the CLI
"Replication layer" clusters can use "storage layer" clusters as storage systems to virtualize
– With V6.4 you can now virtualize a Storwize V7000 with layer=storage behind another Storwize V7000 with layer=replication
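A sketch of enabling this from the CLI (the cluster name and bandwidth value are illustrative; bandwidth is in MBps):

svctask chcluster -layer replication             (run on the Storwize V7000 that will replicate)
svctask mkpartnership -bandwidth 200 clusterB    (create the remote copy partnership; repeat from the remote system)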
Remote Copy – Configuration Example

[Diagram: the replication layer contains SVC V6.4.x Cluster A, SVC V6.3.x Cluster B, and SWV7K V6.4.x Cluster C (layer=replication), connected by remote copy partnerships RC_partnership_1 and RC_partnership_2; the storage layer contains SWV7K V6.4.x Cluster D (layer=storage) and SWV7K V6.x Cluster E (layer=storage), virtualized behind the replication-layer clusters]

NOTE: Provisioning SWV7K storage to another SWV7K with layer=replication requires that both SWV7Ks be running V6.4 or later software
Miscellaneous

Space Efficient Volume grain size
Due to performance considerations and interaction with Easy Tier, the default grain size of a Thin Provisioned Volume has been changed to 256KB rather than 32KB
– Also helps avoid host I/Os of up to 256KB being decomposed into smaller I/Os to MDisks
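The grain size can still be set explicitly when creating a Thin Provisioned Volume; a minimal sketch with illustrative names:

svctask mkvdisk -mdiskgrp pool0 -iogrp io_grp0 -size 100 -unit gb -rsize 2% -autoexpand -grainsize 256 -name vol1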
SCSI-3 Persistent Reserve
This release extends the existing persistent reservation support to add additional persistent reserve functions:
– PR reservation type "Write Exclusive All Registrants"
– PR reservation type "Exclusive Access All Registrants"
– Report Capabilities service action of the "Persistent Reserve In" command
These additional persistent reserve functions will allow GPFS to use persistent reserves on a Storwize V7000 or SVC system
Miscellaneous

V6.4 will support direct-attached hosts via FC with SCORE/RPQ only
– Full support to be added in a later release
Requires changes to host properties
– In the current release all direct-attach host status will report as "degraded"
– When fully supported, direct-attach host status will report as "active/inactive"
– Status will be "offline" if not connected
The status field will be "online" if the host has an active login in each I/O group where it can see Volumes mapped to it
Direct-attach hosts can only use FC ports that are not required for intra-cluster connectivity or SAN use for hosts, disk, or replication
– In a single control enclosure Storwize V7000 there will be 8 ports available
– A clustered Storwize V7000 will not currently support direct-attach FC hosts
The view "lsportfc" will report direct or fabric attachment of the port
No changes to the "lshbaportcandidate", "mkhost", or "addhostport" commands
Miscellaneous

The FlashCopy GUI panel now displays a timestamp showing when a mapping was started
Miscellaneous

SVC and Storwize V7000 software upgrade has a new "prepare" phase when upgrading from V6.4 to a later release
– Initially this will not do anything, but it is part of future plans related to the cache architecture
– New CCU states have been introduced:
• preparing, prepared, prepare_failed
• For information only; you may see these, and for now you can ignore them
New quorum scanning design to try to recover from corrupt quorum data caused by drive faults
– Quorum will regularly be read and validated
– Invalid quorum will ideally be moved to a new device
– If no new device is available, quorum will be re-written
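Quorum placement can be inspected and adjusted with the existing quorum commands; a short sketch (the MDisk id and quorum index are illustrative):

svcinfo lsquorum              (show which MDisks/drives hold the three quorum copies)
svctask chquorum -mdisk 5 2   (move quorum index 2 onto MDisk 5)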
Software upgrade package size is increasing to about 500MB from about 340MB
TPC stats collection for internal MDisks will show a response time in V6.3.0.2 and later
Legal Information and Trademarks

The following terms are trademarks of International Business Machines Corporation in the United States, other countries, or both: IBM, IBM Logo, on demand business logo, Enterprise Storage Server, xSeries, BladeCenter, eServer, ServeRAID, FlashCopy, System Storage, Tivoli, Easy Tier, Active Cloud Engine.
The following are trademarks or registered trademarks of other companies:
Intel is a trademark of the Intel Corporation in the United States and other countries.
Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other countries.
Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation.
Linux is a registered trademark of Linus Torvalds.
Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Storwize and the Storwize logo are trademarks or registered trademarks of Storwize Inc., an IBM Company.
* All other products may be trademarks or registered trademarks of their respective companies.
Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here.
IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.
All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions.
This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area.
The information on the new products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information on the new products is for informational purposes only and may not be incorporated into any contract. The information on the new products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. The development, release, and timing of any features or functionality described for our products remains at our sole discretion.
All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.
Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.
Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.
This presentation and the claims outlined in it were reviewed for compliance with US law. Adaptations of these claims for use in other geographies must be reviewed by the local country counsel for compliance with local laws.