
Page 1

© 2009 IBM Corporation

IBM i and Open Storage 4Q 2009 Update

IBM Power Systems

Mike Schambureck

Vess Natchev

IBM Lab Services, Rochester

Page 2

Agenda

October 2009 enhancements

– DS5100 and DS5300 direct attachment to IBM i

– NPIV support for DS8000 and Fibre Channel (FC) tape libraries

– Redundant VIOS support with client-side MPIO

– End-to-end device mapping

– SSD analyzer improvements

Previous 2009 enhancements

– XIV support

– VIOS configuration in HMC

– Active Memory Sharing (AMS)

– SAN support summary with direct attachment

– SAN support summary with VIOS

– Let IBM Lab Services help you with IBM i and open storage!

Page 3

IBM i Direct Support for DS5100 and DS5300 Overview

IBM i supports direct attachment to DS5100 and DS5300 storage to enable simpler SAN planning and leverage midrange open storage

DS5100 and DS5300 benefits
– Multiple RAID levels, including RAID 6
– Custom XOR processor for RAID calculations
– Consolidated storage for IBM i, Unix, Linux, Windows applications
– Can use FC or SATA drives

DS5100 and DS5300 currently supported

New support does not require PowerVM Virtual I/O Server (VIOS)
– PowerVM may be required for shared processors, Active Memory Sharing

IBM i LPAR owns Fibre Channel adapter(s)
– Two adapters required for redundancy

New IBM i host kit required on DS5100 and DS5300
– Feature code #7735

IBM i supports Multi-path I/O with direct attachment to DS5100 and DS5300
– No additional software required on DS5100 and DS5300
– Round-robin algorithm used with Multi-path I/O in IBM i


* All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Page 4

Requirements, Best Practices and Limitations

Hardware requirements:

– DS5100 or DS5300
– Power servers (Power blades not supported for direct attachment)
– POWER6 hardware
– POWER5(+) hardware not supported
– Smart I/O adapter (IOPless adapter)

    #5735 (8Gb dual-port PCI-E)
    #5774 (4Gb dual-port PCI-E)
    #5749 (4Gb dual-port PCI-X)

Software requirements:
– IBM i 6.1 with IBM i 6.1.1 LIC refresh
– IBM i 5.4.5 or earlier not supported
– 7.60.26.00 or later DS5100 or DS5300 controller firmware

Best practices and limitations:
– Maximum LUN size for IBM i is < 2TB (volumes >= 2TB will not work)
– Mixing FC and SATA drives within an enclosure is supported
– FC drives recommended for production workloads
– Maximum 64 LUNs per FC port (same as DS8000 direct attach)
– SSDs not supported at this time
– Drive encryption in DS5100/DS5300 is supported
– Boot from SAN requires an active connection; IBM i will not IPL from a passive connection
– Maximum of 300 LUNs from a single DS5100/DS5300 per ASP add
    Adding more LUNs to the same ASP requires multiple ASP adds
– All LUNs must be protected (RAID0 arrays for LUNs not supported)
– Dynamic volume expansion on DS5100 and DS5300 is not supported
– IBM i system-level disk mirroring not supported
    Disk protection is configured on the DS5100 or DS5300; drives report in IBM i as protected (same as DS8000 protected LUNs)

Page 5

Configuring DS5100 and DS5300 for IBM i Direct Attachment – Storage

Step 1: Perform sizing
– Use Disk Magic for DDM/array sizing, use PCRM* for LUN sizing
– Number of physical drives is still most important
– Separate LUNs for production and development workloads on different RAID arrays
– Array and LUN configuration for IBM i is different from that for AIX direct or for AIX through VIOS
– Leverage service vouchers for assistance (see later in this presentation)

Step 2: Use the DS5100 and DS5300 Redbook (SG24-7676) to perform initial DS5100 and DS5300 setup

Step 3: Create RAID5, RAID6 or RAID10 array

Step 4: Create 512B LUNs based on Disk Magic sizing

Step 5: Create new host, type IBM i

Step 6: Map LUNs to new host

Make sure there are no hosts in the default host group on DS5100/DS5300
– Storage partitioning recommended
– Without storage partitioning, IBM i will see LUNs attached to any host in the default host group in addition to its own hosts

[Screenshot: DS Storage Manager]

* http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html

Page 6

Configuring DS5100 and DS5300 for IBM i Direct Attachment – Power System

Create IBM i LPAR, assign supported FC adapter(s), OR

Assign supported FC adapters to existing IBM i LPAR

Perform zoning

– Zone each port on the FC adapter to connect to controller A or B

– Maximizes redundancy and performance

Connect FC adapters to SAN fabric

– See note on default host group on DS5100/DS5300 before performing this step
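Once zoning is complete and the LUNs are mapped, a quick check from the IBM i command line confirms the storage is visible. This is a hedged sketch of the verification flow; the menu path and command choices are typical examples, not steps taken from this presentation:

    WRKHDWRSC TYPE(*STG)    /* Confirm the FC IOAs (e.g. #5735) report as operational        */
    STRSST                  /* Then: Work with disk units -> Display disk configuration ->   */
                            /* Display non-configured units; new LUNs appear as DDxxx units  */
    WRKDSKSTS               /* After the LUNs are added to an ASP, check unit status here    */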

Page 7

Comparison with DS5100 and DS5300 Attachment to IBM i through VIOS

Performance:
– Performance is expected to be similar in both environments
– Direct attachment will not provide better performance
– VIOS does not add significant overhead

Skills:
– DS5100/DS5300 and IBM i skills required
– VIOS skills not required

Copy Services and replication:
– DS5100 and DS5300 FlashCopy, Metro Mirror and Global Mirror supported for IBM i attachment through VIOS
    Full-system only
    IASPs not supported
– IBM is investigating supporting Copy Services with direct attachment
– PowerHA for i Geographic Mirroring is supported in both environments

Page 8

NPIV Support for DS8000 and FC Tape Libraries

[Diagram: two attachment models side by side – on the left, VSCSI with EMC, DS5000, SVC and DS8000 storage attached to VIOS FC HBAs and served to IBM i as generic SCSI disks; on the right, NPIV with DS8000 LUNs and FC tape on the SAN mapped through VIOS FC HBAs directly to a virtual FC adapter in IBM i]

VSCSI
– Multiple storage subsystems supported*
– Storage assigned to VIOS first, then virtualized to IBM i
– Can be configured with HMC or IVM

NPIV
– DS8000 and certain FC tape libraries supported
– Storage mapped directly to the Virtual FC adapter in IBM i, which uses an N_Port on the FC adapter in VIOS
– Uses VIOS in passthrough mode

* See section 7 of the IBM i Virtualization and Open Storage Readme: http://www.ibm.com/systems/i/os
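For readers who work on the VIOS command line, the difference between the two models is visible in how mappings are listed. A minimal sketch, assuming a standard VIOS 2.1 ioscli session (output descriptions are paraphrased):

    $ lsmap -all          # VSCSI: each vhost adapter lists its VTDs and backing devices (hdiskX owned by VIOS)
    $ lsmap -all -npiv    # NPIV: each vfchost adapter shows only the client LPAR and the physical FC port;
                          #       the DS8000 LUNs and tape drives are never visible to VIOS itself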

Page 9

NPIV Benefits for IBM i

Sharing dual-port 8Gb FC adapters among multiple partitions
– One port from two separate adapters can be shared to the same client partition, allowing for adapter redundancy

Support for FC tape libraries in a VIOS environment
– Tape library recognized with device-specific machine-type and model (MTM) in IBM i, and all library functions supported

Support for DS8000 storage
– Volumes recognized with device-specific MTM and as protected or unprotected

Support for DS8000 Copy Services with PowerHA in a VIOS environment
– Similar to natively attached DS8000 today

Page 10

Configure Disk and Tape Virtualization with NPIV

[Diagram: NPIV configuration – virtual FC adapter in IBM i mapped through VIOS FC HBAs (passthrough) to DS8000 storage on the SAN; callouts 1 and 2 mark the two steps below]

Step 1: configure virtual and physical FC adapters

Step 2: configure SAN fabric and storage/tape

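The HMC screens on the following pages cover step 1; the same server-adapter-to-port mapping can also be done from the VIOS restricted shell. A hedged sketch, assuming NPIV-capable 8Gb ports and example device names (vfchost0, fcs0):

    $ lsnports                               # verify the physical FC ports and switch support NPIV
    $ vfcmap -vadapter vfchost0 -fcp fcs0    # bind the virtual FC server adapter to an NPIV-capable port
    $ lsmap -all -npiv                       # confirm the mapping and that the client LPAR has logged in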

Page 11

Use HMC to create Virtual FC (VFC) server and client adapters


Configure Disk and Tape Virtualization with NPIV

Page 12


Use HMC to assign IBM i LPAR and VFC adapter pair to physical FC port

Configure Disk and Tape Virtualization with NPIV

Page 13


Use DS8000 or tape library UI and Redbook to assign LUNs or tape drives to the WWPN from the VFC client adapter in i LPAR
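The WWPNs to assign to are the pair generated for the client VFC adapter, visible in the adapter properties on the HMC. Once the DS8000 volumes or tape drives are assigned, a hedged check from the IBM i side (device names are illustrative):

    WRKHDWRSC TYPE(*STG)    /* The virtual FC IOA and DS8000 disk units report with their native MTM */
    WRKMLBSTS               /* An NPIV-attached tape library reports as a TAPMLBxx media library      */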

Configure Disk and Tape Virtualization with NPIV

Page 14

Redundant VIOS Support with Client-side MPIO

Redundant VIOS Support
– Enables IBM i 6.1 partitions to utilize redundant VIOS partitions with one set of storage LUNs
– Improves availability of the virtualized environment
– Two paging VIOS partitions can be assigned to a shared memory pool to provide redundant access to the paging devices
– Available on Power servers only, not on Power blades

[Diagram: IBM i partition multipathing through two VIOS partitions on the Power Hypervisor]
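When the same set of LUNs is mapped through two VIOS partitions, the SCSI reservation on each hdisk normally has to be released so both VIOS can present it; IBM i 6.1 then multipaths across the two virtual adapters. A minimal sketch of the per-VIOS commands, assuming default MPIO and example device names:

    $ chdev -dev hdisk4 -attr reserve_policy=no_reserve   # run on BOTH VIOS partitions for each shared LUN
    $ lsdev -dev hdisk4 -attr reserve_policy              # confirm the reservation policy
    $ mkvdev -vdev hdisk4 -vadapter vhost0                # map the LUN to the IBM i client from each VIOS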

Page 15

Host group definition on SAN

Host groups are used to share access to the same logical drives between VIOS partitions

And repeat for the second VIOS host

Page 16

Page 17

End-to-end Device Mapping

In HMC: Server->Partitions->VIOS LPAR->Hardware Information-> Virtual I/O Adapters->SCSI

Allows direct mapping of LUNs attached to VIOS (hdiskX) to disk units in IBM i (DD0xx)
Facilitates configuration changes and troubleshooting
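The HMC view does this correlation automatically; it can also be done by hand from the VIOS command line. A hedged sketch:

    $ lsmap -all    # for each vhost adapter, note the VTD, its LUN number and the backing device (hdiskX);
                    # the LUN number is what you match against the disk unit details on the IBM i side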

Page 18

Enhanced Leadership Support for SSDs

IBM i 5.4 and 6.1 Storage Manager can be used to maximize the impact of high-performing SSDs
– New trace and balance commands can move hot data to SSDs
– High-priority object types are automatically placed on SSD drives
– DB2 objects have a new parameter and can be placed on SSD drives

IBM i advanced support for SSDs includes
– SSDs installed in I/O drawers
– SSDs installed in SANs*
– SSDs installed behind VIOS*

New SSD Analyzer Tool#
– Designed to help determine if SSDs can help improve application performance
– Runs on the customer’s IBM i 5.4 or 6.1 system
– Up to 40% increase in throughput implementing IBM-invented skip-read-write technology

[Chart: Batch Performance Runs – batch run time in hours for 72 drives, 72 drives + 8 SSDs, and 60 drives + 4 SSDs, showing a 40% reduction. Associated Bank reduced batch run time by 40% with SSDs.]

* Requires IBM i 6.1.1
# Download: http://www.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3780

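The trace and balance commands referred to above are CL commands. A hedged sketch of a typical sequence, assuming ASP 1 and an illustrative file name (the exact balance type and parameter support depend on the 5.4/6.1 PTF level):

    TRCASPBAL SET(*ON) ASP(1) TIMLMT(120)       /* trace read activity in ASP 1 for a representative 2 hours */
    TRCASPBAL SET(*OFF) ASP(1)                  /* stop the trace early if needed                            */
    STRASPBAL TYPE(*HSM) ASP(1) TIMLMT(*NOMAX)  /* migrate traced hot data to the SSDs in the ASP            */

    CHGPF FILE(MYLIB/MYFILE) UNIT(*SSD)         /* DB2 media preference: place this file's data on SSD       */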

Page 19

SSD Analyzer Tool

                     SSD ANALYZER TOOL (ANZSSDDTA)

 Type choices, press Enter.

 PERFORMANCE MEMBER . . . . . . .   *DEFAULT      Name, *DEFAULT
   LIBRARY  . . . . . . . . . . .                 Name

                        Additional Parameters

 REPORT TYPE  . . . . . . . . . .   *SUMMARY      *DETAIL, *SUMMARY, *BOTH
 TIME PERIOD:
   START TIME AND DATE:
     BEGINNING TIME . . . . . . .   *AVAIL        Time, *AVAIL
     BEGINNING DATE . . . . . . .   *BEGIN        Date, *BEGIN
   END TIME AND DATE:
     ENDING TIME  . . . . . . . .   *AVAIL        Time, *AVAIL
     ENDING DATE  . . . . . . . .   *END          Date, *END
 DETAIL REPORT SORT ORDER . . . .   *DISKTOT      *JOBNAME, *CPUTOT...
 NUMBER OF RECORDS IN REPORT  . .   50            0 - 9999, *ALL

                                                                      Bottom
 F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel
 F13=How to use this display   F24=More keys

Available via www.ibm.com/support/techdocs in Presentations & tools. Search using keyword SSD

Page 20

IBM i Support for XIV

IBM i supports the XIV Storage System with PowerVM VIOS to leverage the innovative subsystem’s ease of management and performance

XIV benefits
– Massive parallelism (all disk units used at all times)
– Easy upgradeability
– Simplified configuration
– Unlimited snapshots with little effect on performance

VIOS can virtualize XIV storage to IBM i, AIX and Linux
– PowerVM VIOS configurations with IBM i 6.1 partitions
– POWER6 processor-based servers and blades

XIV performance for IBM i
– Performance falls between DS5000 and DS8000, closer to DS5000
– Performance requirements for all workloads moving to XIV should be carefully evaluated to verify the storage will be able to deliver adequate performance for IBM i


Page 21

VIOS Configuration: HMC System

1) Assign volumes to FC adapters in VIOS using WWPNs
2) In HMC: Server -> Configuration -> Virtual Resources -> Virtual Storage Management

No need to use the VIOS command line
Assign the volume to the correct IBM i LPAR
The volume then becomes available to IBM i as DDxx
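If the VIOS command line is used instead of the HMC Virtual Storage Management panel, the equivalent mapping looks roughly like this (a sketch with example device names; -dev simply names the virtual target device):

    $ lsdev -type disk                                    # find the new LUN reported to VIOS as hdiskX
    $ mkvdev -vdev hdisk6 -vadapter vhost1 -dev ibmi_ld1  # map it to the vhost adapter serving the IBM i LPAR
    $ lsmap -vadapter vhost1                              # verify; the LUN then reports in IBM i as DDxx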

Page 22

VIOS Configuration: IVM System

1) Assign volumes to FC adapters in VIOS using WWPNs
2) In IVM: View/Modify Virtual Storage -> Physical Volumes

No need to use the VIOS command line
Assign the volume to the correct IBM i LPAR
The volume then becomes available to IBM i as DDxx

Page 23

PowerVM Active Memory Sharing (AMS)

Innovative solution for more efficient utilization of memory resources

Available with PowerVM Enterprise Edition
– No additional cost

System requirements:
– IBM Power Systems server or blade with POWER6 processors
– Virtual I/O Server (VIOS) 2.1.1
– Firmware level: eFW 3.4.2
– HMC v7.342

Active Memory Sharing is optionally configurable on a partition basis
– Shared Memory Partitions must be defined in shared processor mode
– Shared Memory Partitions must be “pure virtual”, i.e. all I/O devices are accessed via the VIOS:
    Virtual SCSI Client Adapters
    Virtual Ethernet Adapters
    Virtual Serial Server Adapters

Operating systems supported:
– AIX 6.1 TL3
– IBM i 6.1 plus PTFs
– SUSE Linux Enterprise Server 11


[Chart: Memory Usage (GB) over Time for workloads #1–#10 sharing the memory pool]

White paper on AMS: http://www.ibm.com/systems/power/software/virtualization/whitepapers/ams.html

Page 24

IVM example: working with AMS – define a pool

Page 25

IVM example: working with AMS – pool definition

Page 26

IVM example: working with AMS – partition properties

Page 27

HMC example: working with AMS – define a pool

Select the System i to work with -> Tasks -> Virtual Resources -> Configuration -> Shared memory pool management

Page 28

HMC example: working with AMS – define a pool

Page 29

HMC example: working with AMS – partition properties

Page 30

IBM System Storage Support for IBM i – Direct Attach

| | DS5100 | DS5300 | DS6800 | DS8100 | DS8300 | DS8700 |
|---|---|---|---|---|---|---|
| Systems | POWER6* | POWER6* | POWER5/6 | POWER5/6* | POWER5/6* | POWER5/6* |
| Ports (max) | Fibre - 16 | Fibre - 16 | Fibre - 8 | Fibre - 64 | Fibre - 128 | Fibre - 128 |
| # of drives (max) | Up to 480 FC/SATA | Up to 480 FC/SATA | 128 FC/FATA | 384 FC/FATA | 1024 FC/FATA | 1024 FC/FATA |
| Cache (max) | Up to 64 GB | Up to 64 GB | 4 GB | 128 GB | 256 GB | 384 GB |
| RAID | 1, 3, 5, 6, 10 | 1, 3, 5, 6, 10 | 5, 10 | 5, 6, 10 | 5, 6, 10 | 5, 6, 10 |
| System Storage Managed: FlashCopy | ** | ** | Yes | Yes | Yes | Yes |
| System Storage Managed: Metro Mirror | ** | ** | Yes | Yes | Yes | Yes |
| System Storage Managed: Global Mirror | ** | ** | Yes | Yes | Yes | Yes |
| PowerHA Managed: Metro Mirror | No | No | Yes | Yes | Yes | Yes |
| PowerHA Managed: Global Mirror | No | No | Yes | Yes | Yes | Yes |
| PowerHA Managed: Geo Mirror | Yes | Yes | Yes | Yes | Yes | Yes |
| Logical Replication: iCluster & Other | Yes | Yes | Yes | Yes | Yes | Yes |

Source: http://www-03.ibm.com/systems/storage/disk/product-compare.html

* DS5100, DS5300, DS8100, DS8300, and DS8700 are also supported via VIOS for POWER6 and BladeCenter H
** IBM is investigating supporting Copy Services with direct attach DS5100 and DS5300

Page 31

IBM System Storage Support for IBM i – PowerVM VIOS Attach

| | DS3200 | DS3400 | DS4700 | DS4800 | DS5020 | DS5100 | DS5300 | XIV |
|---|---|---|---|---|---|---|---|---|
| Systems | BladeCenter S and H | POWER6, BladeCenter H | POWER6, BladeCenter H | POWER6, BladeCenter H | POWER6, BladeCenter H | POWER6, BladeCenter H | POWER6, BladeCenter H | POWER6, BladeCenter H |
| VIOS | Yes | Yes | Yes | Yes | Yes | Optional | Optional | Yes |
| Ports (max) | SAS - 4 | Fibre - 4 | Fibre - 8 | Fibre - 8 | Fibre - 4 | Fibre - 8 | Fibre - 16 | Fibre - 24 |
| # of drives (max) | 48 SAS/SATA | 48 SAS/SATA | 112 FC/SATA | 224 FC/SATA | 112 FC/SATA | Up to 480 FC/SATA | Up to 480 FC/SATA | 180 SATA |
| Cache (max) | 1 GB | 1 GB | 4 GB | 16 GB | 2 GB | 8 GB | 16 GB | 120 GB |
| RAID | 0, 1, 3, 5, 6, 10 | 0, 1, 3, 5, 6, 10 | 0, 1, 3, 5, 6, 10 | 0, 1, 3, 5, 10 | 0, 1, 3, 5, 6, 10 | 0, 1, 3, 5, 6, 10 | 0, 1, 3, 5, 6, 10 | Mirrored |
| System Storage Managed: FlashCopy | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| System Storage Managed: Metro Mirror | No | No | Yes | Yes | Yes | Yes | Yes | Yes |
| System Storage Managed: Global Mirror | No | No | Yes | Yes | Yes | Yes | Yes | No |
| PowerHA Managed: Metro Mirror | No | No | No | No | No | No | No | No |
| PowerHA Managed: Global Mirror | No | No | No | No | No | No | No | No |
| PowerHA Managed: Geo Mirror | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Logical Replication: iCluster & Other | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Source: http://www-03.ibm.com/systems/storage/disk/product-compare.html

Page 32

IBM Systems Lab Services Virtualization Program

What is it?
– Free presales technical assistance from Lab Services
– Help with virtualization solutions:
    Open storage (such as DS5100 and DS5300)
    Power blades
    IBM Systems Director VMControl
    Other PowerVM technologies
– Design solution, hold Q&A session with client, verify hardware configuration
– Does not cover implementation

Who can use it?
– IBMers, Business Partners, clients

How do I use it?
– Contact Lab Services for nomination form – [email protected]
– Send in form
– Participate in assessment call with Virtualization Program team
– Work with dedicated Lab Services technical resource to design solution before the sale

Page 33

Service Vouchers

• Getting-started implementation services
• 8 hours of services with each voucher
• Any additional services are billable
• http://www.ibm.com/systems/i/hardware/editions/services.html

Page 34

Resources

IBM i Virtualization and Open Storage Read-me First: http://www.ibm.com/systems/i/os
IBM i and storage performance information: http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
Service vouchers: http://www.ibm.com/systems/i/hardware/editions/services.html

Page 35

Trademarks and Disclaimers

© IBM Corporation 1994-2007. All rights reserved.

References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact your IBM representative or Business Partner for the most current pricing in your geography.

Photographs shown may be engineering prototypes. Changes may be incorporated in production models.
