Linux on zSeries Performance Update – Session 9390
Steffen Thoss, [email protected]


Page 1: Linux on zSeries Performance Update

Steffen Thoss, [email protected]

Linux on zSeries Performance Update
Session 9390

Page 2: Linux on zSeries Performance Update

Trademarks

The following are trademarks of the International Business Machines Corporation in the United States and/or other countries: Enterprise Storage Server, ESCON*, FICON, FICON Express, HiperSockets, IBM*, IBM logo*, IBM eServer, Netfinity*, S/390*, VM/ESA*, WebSphere*, z/VM, zSeries. (* Registered trademarks of IBM Corporation)

The following are trademarks or registered trademarks of other companies:
- Intel is a trademark of the Intel Corporation in the United States and other countries.
- Java and all Java-related trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc., in the United States and other countries.
- Lotus, Notes, and Domino are trademarks or registered trademarks of Lotus Development Corporation.
- Linux is a registered trademark of Linus Torvalds.
- Microsoft, Windows and Windows NT are registered trademarks of Microsoft Corporation.
- Penguin (Tux) compliments of Larry Ewing.
- SET and Secure Electronic Transaction are trademarks owned by SET Secure Electronic Transaction LLC.
- UNIX is a registered trademark of The Open Group in the United States and other countries.

All other products may be trademarks or registered trademarks of their respective companies.


Page 3: Linux on zSeries Performance Update

Agenda

- Relative System Capacity
- zSeries Hardware
- Scalability
- Networking
- Context Switches
- Disk I/O
  - Parallel Access Volume (PAV)
  - ESS

Page 4: Linux on zSeries Performance Update

Relative System Capacity

A system provides different types of resources.
Capacity for each resource type may be different.
The ideal machine provides enough capacity of each type.
Don't forget additional resources (network, skilled staff, money, availability of software, reliability, time, ...).

[Diagram: resource mix on zSeries – CPU, memory, and I/O, with application cache, disk cache, and blocked I/O]

The ideal platform requires a mix of resources in the right quantity.

Page 5: Linux on zSeries Performance Update

Resource Profiles

Each application has its specific requirements.

[Chart: resource profiles – CPU intensive, I/O intensive, memory]

Applications can often be tuned to change the resource profile:
- Exchange one resource for the other
- Requires knowledge about available resources

Some platforms can be extended better than others.
Not every platform runs every application well.
It's not easy to determine the resource profile of an application.


Page 6: Linux on zSeries Performance Update

zSeries Hardware

z800/z900

z990

Page 7: Linux on zSeries Performance Update

z900 System Structure: Optimized for Maximum External Bandwidth

[Diagram: z900 system structure – 20 PU chips @ 1.3 / 1.09 ns (3 SAPs, 1 spare, up to 16 CPs, up to 8 ICFs/IFLs), two 16 MB level 2 caches of 10 PUs each, memory cards of up to 16 GB, and four MBAs, each driving six STIs at 1 GB/s]

Page 8: Linux on zSeries Performance Update

zSeries 2003: Extended Multi-Node (Book) Structures

From z900 ... to next generation modular zSeries systems with x-fold capacity.

[Diagram: the z900's two nodes (10 PUs, level 2 cache, and memory each) extended to four book nodes of 12 PUs, level 2 cache, and memory each]

- 0.8x nsec CPU cycle
- Superscalar design
- 50..60 % more UP performance vs. 2C1

Page 9: Linux on zSeries Performance Update

zSeries 2003: Multi-Book (Node) Structures (logical view)

- A single pool of physical resources (CPUs, memory, I/O) in modular implementation (n = 1/2/3/4 nodes/'books')
- Multiple channel subsystems (n x 256 CHPIDs)
- Exploitation through virtual servers: 15, 30, 60 (SOD) LPARs ...100+... (VM)

[Diagram: PUs, level 2 cache, memory, and I/O pooled across books]

Page 10: Linux on zSeries Performance Update

z990 – 4 New Models

Machine type: 2084

4 models:
- A08 & B16 (GA1 = 6/16/2003), maximum 2 books
- C24 & D32 (GA2 = 4Q2003), maximum 4 books

Each processor book has 12 PUs:
- 8 PUs available for characterization as CPs, IFLs, ICFs, or additional SAPs
- 2 PUs standard as SAPs
- 2 PUs standard as spares

Memory:
- Up to 64 GB per book
- System minimum of 8 GB
- 8 GB increments

I/O:
- Each book has up to 12 STIs @ 2 GB/s
- Up to 96 GB/s system bandwidth
- Up to 512 channels/CEC, dependent on channel types

Page 11: Linux on zSeries Performance Update

Our Hardware for Measurements

2105-F20 (Shark):
- 384 MB NVS
- 16 GB cache
- 128 * 36 GB disks, 10,000 RPM
- FCP (2 Gbps)
- FICON (1 Gbps)

8687-3RX (8-way x440):
- 8-way Intel Pentium III Xeon, 1.6 GHz
- 8 * 512 KB L2 cache (private)
- Hyperthreading
- Summit chipset

2084-B16 (z990):
- 2 books, each with 8 PUs
- 64 GB memory
- OSA Express GbE
- FICON
- HiperSockets
- z/VM 4.4

2064-216 (z900):
- 1.09 ns (917 MHz)
- 2 * 16 MB L2 cache (shared)
- 64 GB memory
- FICON
- HiperSockets
- OSA Express GbE

Page 12: Linux on zSeries Performance Update

SuSE SLES7 versus SuSE SLES8

- From kernel version 2.4.7 / 2.4.17 to version 2.4.19
- From glibc version 2.2.4-31 to version 2.2.5-84
- From gcc version 2.95.3 to version 3.2-31
- Huge number of United Linux patches: 1.3 MLOC (including x, p, i changes)
- New Linux scheduler
- Async I/O
- SLES8 SP2 available

Page 13: Linux on zSeries Performance Update

Scalability – Dbench File I/O

Emulation of the Netbench benchmark, which rates Windows file servers.
Mixed file operations workload for each process: create, write, read, append, delete.

Scaling for Linux with 1, 4, 8, 16 PUs.
Scaling for 1, 4, 8, 12, 16, 20, 26, 32, 40 clients (processes) simultaneously.

Runs completely in main memory/buffer cache in our scenario (2 GB main memory).
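A sweep over these client counts could be scripted along the following lines. This is only a sketch: the dbench invocation is not part of the original setup description, so it is echoed rather than executed.

```shell
#!/bin/sh
# Sketch of the client-count sweep described above. dbench is assumed
# to be installed; echo stands in for the real invocation so the sweep
# can be shown without running the benchmark.
for clients in 1 4 8 12 16 20 26 32 40; do
    echo "dbench $clients"
done
```

Each iteration would run one Dbench pass with the given number of client processes.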

Page 14: Linux on zSeries Performance Update

Scalability – z900 vs Intel, ext2, 31/32 Bit

[Charts: Dbench throughput in MB/sec vs. number of processes (1 to 40), one curve per CPU count (1, 4, 8, 16); left: LPAR on z900, right: x440]

z900 shows good scaling behavior.
x440 shows best throughput with 4 CPUs; strong throughput degradation with more than 4 CPUs.

Page 15: Linux on zSeries Performance Update

Scalability – z900 vs z990, ext2, 31 Bit

[Charts: Dbench throughput in MB/sec vs. number of processes (1 to 40), one curve per CPU count (1, 4, 8, 16); left: LPAR on z900, right: LPAR on z990]

z990 takes advantage of higher memory bandwidth.
Small improvement with 16 PUs because the PUs are spread over 2 books.

Page 16: Linux on zSeries Performance Update

Networking

IBM internal benchmark Netmark 2, based on netperf. Simulates network traffic. Adjustable parameters:
- runtime
- packet size
- number of connections
- ...

Huge results file with much statistical information.
Numbers measured on z900 and z990.

Page 17: Linux on zSeries Performance Update

HiperSockets MTU 32k – LPAR

[Charts: left – stream workload, throughput in MB/sec for z900 vs. z990 (strg/strp with 1, 10, 50 connections); right – CRR workload, transactions per second for z900 vs. z990 (crr64x8k with 1, 10, 50 connections); higher is better]

Page 18: Linux on zSeries Performance Update

GuestLAN Type HiperSockets MTU 32k – z/VM

[Charts: left – RR 200x32k workload, transactions per second vs. number of connections (1, 10, 50) for z900 and z990, higher is better; right – CPU load in % (q time) for the same workload on client and server side (1cl = 1 connection client side, sv = server), lower is better]

Page 19: Linux on zSeries Performance Update

Gigabit Ethernet MTU 1500 – z/VM

[Charts: left – stream workload, throughput in MB/sec for z900 vs. z990 (strg/strp with 1, 10, 50 connections); right – RR workload (RR 1/1 and RR 200/1000), transactions per second with 1, 10, 50 connections; higher is better]

Page 20: Linux on zSeries Performance Update

Kernel – Context Switches

[Charts: Lmbench context switch time in microseconds vs. number of processes (2 to 256) for working set sizes 0K to 256K; LPAR on z900 (31 bit), LPAR on z990 (31 bit), and Intel x440]

Context switches are much faster on zSeries because of the large shared caches.
Be aware of the different Y-axis scale.

Page 21: Linux on zSeries Performance Update

Parallel Access Volume (PAV) – A Lab Experiment

[Diagram: a guest under VM/ESA accessing a 3390-9 volume through base address 5680 and alias addresses 56B9 ... 56BF]

Linux cannot enable PAV on the ESS but can use it under VM

Page 22: Linux on zSeries Performance Update

Base and Aliases (PAV Cont.)

IOCDS changes:

IODEVICE ADDRESS=(5680,024),UNITADD=00,CUNUMBR=(5680), *
      STADET=Y,UNIT=3390B
IODEVICE ADDRESS=(5698,040),UNITADD=18,CUNUMBR=(5680), *
      STADET=Y,UNIT=3390A

ATTACH base and aliases to the guest.
QUERY PAV shows base and alias addresses.

cat /proc/dasd/devices

5794(ECKD) at ( 94: 0) is dasda : active at blocksize: 4096, 1803060 blocks, 7043 MB
5593(ECKD) at ( 94: 4) is dasdb : active at blocksize: 4096, 601020 blocks, 2347 MB
5680(ECKD) at ( 94: 8) is dasdc : active at blocksize: 4096, 1803060 blocks, 7043 MB
56bf(ECKD) at ( 94: 12) is dasdd : active at blocksize: 4096, 1803060 blocks, 7043 MB

cat /proc/subchannels | egrep "5680|56BF"
5680 0030 3390/0C 3990/E9 yes FC FC FF C6C7C8CA CBC90000
56BF 0031 3390/0C 3990/E9 yes FC FC FF C6C7C8CA CBC90000
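The base/alias pairing can also be checked mechanically. The sketch below greps the two relevant lines (reproduced from the /proc/dasd/devices output above in a here-document; on a live system one would read /proc/dasd/devices directly) and prints the device number and DASD name:

```shell
#!/bin/sh
# Verify that both the PAV base (5680) and its alias (56bf) appear as
# active DASDs. The here-document holds the two lines captured from
# /proc/dasd/devices above.
cat <<'EOF' > devices.sample
5680(ECKD) at ( 94: 8) is dasdc : active at blocksize: 4096, 1803060 blocks, 7043 MB
56bf(ECKD) at ( 94: 12) is dasdd : active at blocksize: 4096, 1803060 blocks, 7043 MB
EOF
# Field 1 is the device number, field 7 the DASD name.
grep -iE '^(5680|56bf)' devices.sample | awk '{ print $1, $7 }'
```

Base and alias address the same physical volume; Linux simply sees them as two DASDs.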

This works only with z/VM

Page 23: Linux on zSeries Performance Update

LVM Commands (PAV Cont.)

vgscan        # create configuration data
pvcreate /dev/dasdc1
vgcreate vg_kb /dev/dasdc1
vgdisplay

Page 24: Linux on zSeries Performance Update

vgdisplay

vgdisplay -v vg_kb
--- Volume group ---
VG Name               vg_kb
VG Access             read/write
VG Status             available/resizable
VG #                  0
MAX LV                256
Cur LV                0
Open LV               0
MAX LV Size           255.99 GB
Max PV                256
Cur PV                1
Act PV                1
VG Size               6.87 GB
PE Size               4 MB
Total PE              1759
Alloc PE / Size       0 / 0
Free PE / Size        1759 / 6.87 GB
VG UUID               3nwJYn-SxW1-gKym-OvZs-TYIf-CrHP-inO5Yp

--- No logical volumes defined in "vg_kb" ---
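The reported sizes are internally consistent: 1759 physical extents at a PE size of 4 MB come to 7036 MB, which is the 6.87 GB shown above. A quick arithmetic check:

```shell
#!/bin/sh
# Sanity-check the vgdisplay numbers: Total PE * PE Size should match
# the reported VG Size of 6.87 GB.
pe_count=1759
pe_size_mb=4
total_mb=$((pe_count * pe_size_mb))
echo "${total_mb} MB"                                             # 7036 MB
awk -v mb="$total_mb" 'BEGIN { printf "%.2f GB\n", mb / 1024 }'   # 6.87 GB
```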

Page 25: Linux on zSeries Performance Update

More LVM Commands

lvcreate --name lv_kb --extents 1759 vg_kb

cat /proc/lvm/global
LVM module LVM version 1.0.5(mp-v6)(15/07/2002)
Total:  1 VG  1 PV  1 LV (0 LVs open)
Global: 32300 bytes malloced   IOP version: 10   3:18:35 active
VG:  vg_kb  [1 PV, 1 LV/0 open]  PE Size: 4096 KB
     Usage [KB/PE]: 7204864 /1759 total  7204864 /1759 used  0 /0 free
PV:  [AA] dasdc1  7204864 /1759  7204864 /1759  0 /0
     +-- dasdd1
LV:  [AWDL ] lv_kb  7204864 /1759  close

lvscan
lvscan -- ACTIVE   "/dev/vg_kb/lv_kb" [6.87 GB]
lvscan -- 1 logical volumes with 6.87 GB total in 1 volume group
lvscan -- 1 active logical volumes

Page 26: Linux on zSeries Performance Update

Enable Paths

pvpath -qa
Physical volume /dev/dasdc1 of vg_kb has 2 paths:
     Device  Weight  Failed  Pending  State
# 0:  94:9        0       0        0  enabled
# 1:  94:13       0       0        0  disabled

The second path can be enabled:
pvpath -p1 -ey /dev/dasdc1
vg_kb: setting state of path #1 of PV#1 to enabled

pvpath -qa
Physical volume /dev/dasdc1 of vg_kb has 2 paths:
     Device  Weight  Failed  Pending  State
# 0:  94:9        0       0        0  enabled
# 1:  94:13       0       0        0  enabled

Now LVM is ready to use both paths to the volume.

Page 27: Linux on zSeries Performance Update

Results

iozone sequential write/read, 1 disk

Paths   Write (MB/s)   Read (MB/s)
  1          14.9           27.0
  2          18.7           46.4
  3          22.4           65.9
  4          23.4           81.4
  5          23.2           96.9
  6          22.6          106.7
  7          21.2          106.7
  8          21.1          119.0
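To put the table into perspective: read throughput scales from 27.0 MB/s with one path to 119.0 MB/s with eight, while write throughput saturates near 23 MB/s after four paths. The speedups can be computed directly from the table:

```shell
#!/bin/sh
# Speedup from 1 path to 8 paths, taken from the iozone table above.
awk 'BEGIN {
    printf "read  speedup 1 -> 8 paths: %.1fx\n", 119.0 / 27.0   # 4.4x
    printf "write speedup 1 -> 8 paths: %.1fx\n", 21.1 / 14.9    # 1.4x
}'
```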

These are preliminary results in a controlled environment. PAV is not yet officially supported with Linux on zSeries!

Page 28: Linux on zSeries Performance Update

ESS – Disk I/O

Don't treat the ESS as a black box; understand its structure.
The default is close to worst case: you ask for 16 disks and your SysAdmin gives you addresses 5100-510F.
What's wrong with that?

Page 29: Linux on zSeries Performance Update

ESS Architecture

➢ CHPIDs

➢ Host Adapters (HA) supporting FCP (FCP ports) – 16 host adapters, organized in 4 bays, 4 ports each

➢ Device Adapter Pairs (DA) - each one supports two loops

➢ Disks are organized in ranks - each rank (8 physical disks) implements one RAID 5 array (with logical disks)

Let's take a deeper look at the elements of the scenario:

Page 30: Linux on zSeries Performance Update

ESS Architecture


Scenarios: single disk, single rank

Page 31: Linux on zSeries Performance Update

ESS Architecture


Scenario: single host adapter

Page 32: Linux on zSeries Performance Update

ESS Architecture


Scenario: single CHPID

Page 33: Linux on zSeries Performance Update

ESS Architecture


Scenario: two CHPIDs

Page 34: Linux on zSeries Performance Update

ESS Architecture


Scenario: four CHPIDs (4C4H4R ESS 2105)

Page 35: Linux on zSeries Performance Update

FCP Measurement

Summary of the scenarios:

Benchmark used for measuring: Iozone (http://www.iozone.org)
- Multi-process sequential file system I/O
- Each process writes and reads a 350 MB file on a separate disk
- System: LPAR, 4 CPUs, 128 MB main memory, Linux 2.4.17 with hz timer off
- Scaling was: 1, 2, 4, 8, 16 processes
- The maximum throughput values were taken as result

Scenario                 CHPIDs   HA   Ranks   Disks   Limiting resource
single disk                   1     1       1       1   1 host adapter
single rank                   1     1       1       8   1 host adapter
single host adapter           1     1       4       8   1 host adapter
single CHPID                  1     4       4      16   1 CHPID
two CHPIDs                    2     4       4      16   2 CHPIDs
maximum available
(= 4C4H4R ESS 2105)           4     4       4      16   4 host adapters
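For the 16-disk scenarios, an iozone invocation along these lines would match the described setup (-s sets the file size, -t the number of throughput-mode processes, -i 0 / -i 1 select the write and read tests). The /mnt/diskN mount points are hypothetical placeholders, not from the original measurement, so the command is echoed rather than executed:

```shell
#!/bin/sh
# Hypothetical iozone command for the 16-process scenario: each process
# writes and reads a 350 MB file on its own disk. Mount points are
# placeholders; the command is printed, not run.
files=""
for i in $(seq 1 16); do
    files="$files /mnt/disk$i/iozone.tmp"
done
echo "iozone -s 350m -t 16 -i 0 -i 1 -F$files"
```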

Page 36: Linux on zSeries Performance Update

Hardware Setup

2064-216, 917 MHz, 256 MB LPAR
4 FICON Express channels used for FCP (IOCDS: type FCP)
6 FICON Express channels used for FICON (IOCDS: type FC)
2109-F16 FCP switch
ESS 2105-F20:
- 16 GB cache, 4 FCP host adapters, 6 FICON host adapters
- 4 device adapter pairs
- Only A-loops contain disks (36.4 GB, 10,000 RPM):
  - 4 ranks of FB (fixed block) disks used for FCP
  - 8 ranks of ECKD disks used for FICON measurements

Page 37: Linux on zSeries Performance Update

Results – Maximum Throughput

[Chart: Iozone maximum throughput in MB/s, 31 bit, sequential file system I/O (sequential write and read) for the scenarios single disk, single rank, single HA, single CHPID, two CHPIDs, and 4C4H4R 2105-F20]

- 1 HA limits to 40 MB/s write and 65 MB/s read, regardless of the number of ranks
- 4 HAs limit to 125 MB/s write and 240 MB/s read, but 4 CHPIDs are required to make use of it
- The 31 bit and 64 bit difference is small
- It is expected that the values increase further using more ranks, HAs, and CHPIDs

Page 38: Linux on zSeries Performance Update

General Rules

this makes it slow:

➢ when all disks are from one rank and accessed via the same path

this makes it fast:

➢ use many host adapters

➢ spread the host adapters used across all host adapter bays

➢ use as many CHPIDs as possible and access each disk through all CHPIDs, if possible (FICON, LVM1-mp)

➢ don't use more than two HAs per CHPID

➢ spread the disks used over all ranks equally

➢ avoid reusing the same resources (CHPID, host adapter, rank) as long as possible

this applies to both FCP and FICON
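The "spread the disks over all ranks" rule boils down to round-robin placement. A toy sketch (disk and rank names are illustrative only, not real device addresses):

```shell
#!/bin/sh
# Toy illustration of round-robin disk placement: 16 logical disks
# spread over 4 ranks, so each rank serves 4 disks instead of one
# rank serving all 16 (the bad default from the earlier slide).
for i in $(seq 0 15); do
    echo "disk$i -> rank$((i % 4))"
done
```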

Page 39: Linux on zSeries Performance Update

Visit us!

Linux for zSeries performance website:
http://www10.software.ibm.com/developerworks/opensource/linux390/whatsnew.shtml

Linux-VM performance website:
http://www.vm.ibm.com/perf/tips/linuxper.html

Performance Redbook: SG24-6926-00

Page 40: Linux on zSeries Performance Update

Questions