
Windows Scalability: Technology, Challenges and Limitations Andreas Kampert Microsoft in the Enterprise




Agenda

Scale-up and Scale-out
Scale-Up
  CPU, Memory, Disks
  What does this mean for Windows applications
Scale-Out
  Clones
  Partitioning
Scale-Up and Scale-Out together
  Application example: Siebel Enterprise Application

Scale UP

Scalable Systems

Scale UP: grow by adding components to a single system
Scale OUT: grow by adding more systems

Scale OUT

Everything starts with understanding your computer

[Diagram: four CPUs (CPU 0–3) and main memory on the system bus, with controllers attached to PCI Bus 1 and PCI Bus 2]

Agenda

Scale-up and Scale-out
Scale-Up
  CPU, Memory, Disks
  What does this mean for Windows applications
Scale-Out
  Clones
  Partitioning
Scale-Up and Scale-Out together
  Application example: Siebel Enterprise Application

The Memory Hierarchy

Locality REALLY matters: CPU at 2 GHz, RAM at 5 MHz
RAM is no longer random access
  Organizing the code gives 3x (or more)
  Organizing the data gives 3x (or more)

Level      Latency (clocks)
Registers  1
L1         2
L2         10
L3         30
Near RAM   100
Far RAM    300
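The locality claim lends itself to a small experiment. Below is a minimal C sketch (not from the deck): the same summation done in row order and in column order over a statically allocated array. The array size and timing method are illustrative assumptions, but on most machines the row-order pass is several times faster, in line with the "3x (or more)" figure.

#include <stdio.h>
#include <time.h>

#define N 2048                      /* 2048 x 2048 doubles = 32 MB, well past any cache */
static double a[N][N];

/* Sum the array either along rows (consecutive addresses, cache friendly)
   or along columns (stride of N*sizeof(double), cache hostile). */
static double sum(int row_order)
{
    double s = 0.0;
    if (row_order) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                s += a[i][j];
    } else {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                s += a[i][j];
    }
    return s;
}

int main(void)
{
    for (int order = 1; order >= 0; order--) {
        clock_t t0 = clock();
        double s = sum(order);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        printf("%-6s order: sum=%.0f  %.3f s\n", order ? "row" : "column", s, secs);
    }
    return 0;
}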

32-bit Windows Virtual Address Space

00000000–7FFFFFFF   Application code, global variables, .DLL code
                    (unique per process, accessible in user or kernel mode)
80000000–BFFFFFFF   Exec, kernel, HAL, drivers, per-thread kernel-mode stacks, Win32K.Sys
                    (system-wide, accessible only in kernel mode)
C0000000–…          Process page tables, hyperspace
                    (per process, accessible only in kernel mode)
…–FFFFFFFF          File system cache, paged pool, system PTEs, non-paged pool …
                    (system-wide, accessible only in kernel mode)

3 GB allows extension of the user portion
Requires a Boot.ini setting (/3GB) plus a large_address_aware image
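A minimal sketch (not part of the deck) of how a process can verify what it actually got: GlobalMemoryStatusEx reports the user-mode virtual address space, so roughly 2 GB is the 32-bit default, roughly 3 GB means /3GB plus a large-address-aware image took effect, and far larger values indicate a 64-bit process. The thresholds in the comments are illustrative.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);

    if (!GlobalMemoryStatusEx(&ms)) {
        printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }

    /* ~2048 MB: plain 32-bit process; ~3072 MB: /3GB + large_address_aware;
       several TB: 64-bit process. */
    printf("User-mode virtual address space: %llu MB\n",
           (unsigned long long)(ms.ullTotalVirtual / (1024 * 1024)));
    printf("Physical memory installed:       %llu MB\n",
           (unsigned long long)(ms.ullTotalPhys / (1024 * 1024)));
    return 0;
}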

Memory Mapping

[Diagram: the user address spaces of Process 1 and Process 2, plus the shared system address space, together form virtual memory and are mapped onto physical memory and the pagefile(s)]

Physical Address Extension for IA32

PAE is required if using more than 4 GB of physical memory
Makes the additional memory available to the OS
Has no impact on applications
Applications require AWE (see later)

Enabling PAE:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows PAE" /PAE

Address Windowing Extensions (AWE) APIs

Allow applications to bypass the 4 GB limit
Advantages of the AWE APIs: a small API set using a windowing technique
  VirtualAlloc() with the MEM_PHYSICAL flag
  AllocateUserPhysicalPages()
  MapUserPhysicalPages()
  FreeUserPhysicalPages()

AWE Mechanism

[Diagram: within the application's 2 GB (or 3 GB) virtual address space, an AWE region is allocated using VirtualAlloc(); AllocateUserPhysicalPages() obtains physical memory, and MapUserPhysicalPages() maps portions of it into the region on demand]
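A minimal sketch (not production code, and not taken from the deck) of the windowing sequence just described: reserve a virtual window with VirtualAlloc(MEM_RESERVE | MEM_PHYSICAL), allocate physical pages, and map a portion of them into the window. The buffer sizes are illustrative, and the account running it needs the "Lock pages in memory" privilege.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* 1. Allocate physical pages (64 MB here); requires SeLockMemoryPrivilege. */
    ULONG_PTR pages = (64u * 1024 * 1024) / si.dwPageSize;
    ULONG_PTR *pfns = (ULONG_PTR *)HeapAlloc(GetProcessHeap(), 0,
                                             pages * sizeof(ULONG_PTR));
    if (!pfns)
        return 1;

    ULONG_PTR got = pages;
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &got, pfns)) {
        printf("AllocateUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }

    /* 2. Reserve a 16 MB virtual "window" to view the physical pages through. */
    SIZE_T window = 16u * 1024 * 1024;
    void *view = VirtualAlloc(NULL, window,
                              MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);
    if (!view)
        return 1;

    /* 3. Map the first part of the physical allocation into the window. */
    ULONG_PTR mapped = window / si.dwPageSize;
    if (!MapUserPhysicalPages(view, mapped, pfns)) {
        printf("MapUserPhysicalPages failed: %lu\n", GetLastError());
        return 1;
    }
    ((char *)view)[0] = 42;    /* the window is now ordinary read/write memory */

    /* 4. Unmap (pass NULL as the page array) and release everything. */
    MapUserPhysicalPages(view, mapped, NULL);
    FreeUserPhysicalPages(GetCurrentProcess(), &got, pfns);
    VirtualFree(view, 0, MEM_RELEASE);
    HeapFree(GetProcessHeap(), 0, pfns);
    return 0;
}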

Hot-Add Memory

Requires hardware and BIOS support
SRAT, ACPI 2.0
Reporting memory at POST

Memory and CPU Limits

General memory limits                       32-bit                                 64-bit
Total virtual address space                 4 GB                                   16 TB
Virtual address space per 32-bit process    2 GB (3 GB if the system is booted     4 GB if compiled with /LARGEADDRESSAWARE,
                                            with the /3GB switch)                  2 GB otherwise
Virtual address space per 64-bit process    Not applicable                         8 TB
Paged pool                                  470 MB                                 128 GB
Non-paged pool                              256 MB                                 128 GB
System PTEs                                 660 MB – 900 MB                        128 GB

Physical memory and CPU limits              32-bit               64-bit
Windows XP Professional                     4 GB / 1–2 CPUs      32 GB / 1–2 CPUs
Windows Server 2003 Standard Edition        4 GB / 1–4 CPUs      32 GB / 1–4 CPUs
Windows Server 2003 Enterprise Edition      64 GB / 1–8 CPUs     1 TB / 1–8 CPUs
Windows Server 2003 Datacenter Edition      64 GB / 1–32 CPUs    1 TB / 1–64 CPUs

Thread Scheduling

Priority driven, preemptive
No attempt to share the processor "fairly" among processes, only among threads
Event-driven; no guaranteed execution period before preemption
Time-sliced, round-robin within a priority level
Simultaneous thread execution on MP systems
  Any processor can interrupt another processor to schedule a thread
  Tries to keep threads on the same CPU ("ideal processor")

[Diagram: thread priority levels 0–31, with 16–31 the real-time range and 1–15 the dynamic range]
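A minimal sketch (not from the deck) of the knobs this refers to: the process priority class plus the relative thread priority select the 0–31 level, and SetThreadIdealProcessor hints at the preferred CPU. The chosen values are illustrative.

#include <windows.h>

int main(void)
{
    /* Priority class (per process) and relative thread priority
       together determine the thread's 0-31 scheduling priority. */
    SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);

    /* Suggest CPU 0 as this thread's "ideal processor"; the scheduler
       may still run it elsewhere. */
    SetThreadIdealProcessor(GetCurrentThread(), 0);

    /* ... latency-sensitive work would go here ... */
    return 0;
}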

Affinity

Threads can run on any CPU, unless affinity is specified otherwise
  Affinity is specified by a bit mask; each bit corresponds to a CPU number
The thread affinity mask must be a subset of the process affinity mask, which in turn must be a subset of the active processor mask
"Hard affinity" can lead to threads getting less CPU time than they normally would
  More applicable to large MP systems running dedicated server apps
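A minimal sketch (not from the deck) of setting hard affinity: the thread is restricted to CPUs 0–7, and the requested mask is intersected with the process mask so it stays a legal subset.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD_PTR processMask, systemMask;
    if (!GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask))
        return 1;

    DWORD_PTR wanted = 0xFF;                      /* bits 0-7 = CPUs 0-7 */
    DWORD_PTR threadMask = wanted & processMask;  /* must stay a subset  */

    if (threadMask == 0 || SetThreadAffinityMask(GetCurrentThread(), threadMask) == 0) {
        printf("Could not apply affinity mask: %lu\n", GetLastError());
        return 1;
    }
    printf("Thread bound to CPU mask 0x%Ix\n", (SIZE_T)threadMask);
    return 0;
}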

Disks Are Becoming Tapes

Capacity: 150 GB, 300 GB, … 2 TB
Bandwidth: 40 MBps → 150 MBps
Read time: 2 hours sequential, 2 days random → 4 hours sequential, 12 days random

150 GB disk: 150 IO/s, 40 MBps
1 TB disk:   200 IO/s, 150 MBps

Amdahl's Balanced System Laws

1 MIPS needs 1 MB of RAM and 20 IO/s
At 1 billion instructions per second:
  need 4 GB per CPU
  need 50 disks per CPU!
  64 CPUs … 3,000 disks

A balanced node: 1 BIPS CPU, 4 GB RAM, 50 disks ≈ 10,000 IOps (about 200 IO/s per disk) and 75 TB

Exchange Server Memory Management

Exchange Server does not use memory beyond 4 GB efficiently
Exchange Server 2003 requires /3GB with more than 1 GB of RAM
Exchange Server 2003 gains no advantage from the use of PAE
AWE is not used by Exchange Server

Relevant performance counters:
  MSExchangeIS\VM Largest Block Size
  MSExchangeIS\VM Total 16MB Free Blocks
  MSExchangeIS\VM Total Free Blocks
  MSExchangeIS\VM Total Large Free Block Bytes

Exchange Server Processors

Exchange Server mailbox servers scale well up to 8 processors
With more than 8 processors, hardware partitioning is mostly recommended
With more than 8 processors, use the affinity mask to restrict Exchange Server 2003 to 8 processors
Possibly add processors for virus scanners, etc.

SQL Server Memory Management

32-bit SQL Server supports up to 64 GB
  Using more than 4 GB requires fixed memory
  Dynamic memory management is no longer possible
  Access time is not linear!
Use 64-bit SQL Server
The same issues apply to other DBMSs

Configuration by physical memory (Y = yes, N = no, o = optional):
         up to 4 GB   up to 16 GB   up to 64 GB
  PAE    N            Y             Y
  /3GB   o            o             N
  AWE    o            Y             Y

Understand What the CPU Does for SQL Server

[Diagram: one Windows thread per CPU (CPU 0 … CPU n), each with its own UMS work queue. The UMS (User Mode Scheduler) schedules fibers on these threads; fibers write results directly to clients over the network; reads issued by fibers are queued by NT to an I/O completion port, and the network handler is notified when the I/O completes]
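SQL Server's scheduler is internal, but the underlying I/O completion port pattern is plain Win32. Below is a minimal sketch (not SQL Server code) of that pattern, with the concurrency limited to one running thread per CPU and a dummy posted completion standing in for real overlapped file or socket I/O.

#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID param)
{
    HANDLE port = (HANDLE)param;
    DWORD bytes;
    ULONG_PTR key;
    OVERLAPPED *ov;

    /* Block until an I/O queued against the port completes. */
    while (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE)) {
        if (key == 0 && ov == NULL)
            break;                              /* shutdown signal */
        printf("completion: %lu bytes for key %Iu\n", bytes, key);
    }
    return 0;
}

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    /* One completion port, limited to one running thread per CPU. */
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0,
                                         si.dwNumberOfProcessors);
    if (!port)
        return 1;

    HANDLE t = CreateThread(NULL, 0, worker, port, 0, NULL);

    /* Real handles opened with FILE_FLAG_OVERLAPPED would be associated via
       CreateIoCompletionPort(handle, port, key, 0); here we just post a
       dummy completion followed by the shutdown signal. */
    PostQueuedCompletionStatus(port, 123, 42, NULL);
    PostQueuedCompletionStatus(port, 0, 0, NULL);

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    CloseHandle(port);
    return 0;
}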

Terminal Server: Historic Issues with Scalability

32-bit systems
  Servers often run out of kernel virtual memory rather than CPU
    All applications must share the same 2 GB kernel address space
    Adding RAM does not help
  Most customers run 1-processor and 2-processor servers
    Administrators must deploy and manage many servers
    Reduces the effectiveness of server consolidation
IA64 systems
  Cannot run 32-bit applications without the high overhead of WOW emulation
  Incremental users per server are outweighed by cost

x64 Editions

Key value
  Core OS functionality and performance benefits (64-bit)
  Runs most existing 32-bit apps with increased performance
  Provides an evolutionary path to 64-bit applications
  Single code base, built on WS03 SP1
  AMD Opteron/Athlon 64 and Intel Xeon EM64T supported with one product
Compatibility
  WS03 SP1 level compatibility
  Application kernel-mode code and drivers must be 64-bit

"First mover" workloads: preliminary testing

Workload                Performance and scale
32-bit database         Up 17%
32-bit business apps    SAP: 10% more users
Networking              Record 7 Gbit/sec transfer
File                    111% higher user capacity
Active Directory        2x higher throughput
Terminal Services       50% more users

Agenda

Scale-up and Scale-out
Scale-Up
  CPU, Memory, Disks
  What does this mean for Windows applications
Scale-Out
  Clones
  Partitioning
Scale-Up and Scale-Out together
  Application example: Siebel Enterprise Application

Clones: Availability + Scalability

Some applications are
  Read-mostly
  Low consistency requirements
  Modest storage requirement (less than 1 TB)
Examples: HTML web servers, LDAP servers

Replicate the app at all nodes (clones)
Load balance:
  Spray & sieve: requests across nodes
  Route: requests across nodes
Grow by adding clones
Fault tolerance: stop sending to a failed clone

Partitions for Scalability

Clones are not appropriate for some apps
  Stateful apps do not replicate well
  High update rates do not replicate well
Examples: email, databases, read/write file servers, cache managers, chat

Partition state among the servers
Partitioning:
  must be transparent to the client
  split and merge partitions online
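A minimal sketch (not from the deck) of the routing idea: the partition for a request is derived by hashing its key, so clients never need to know how the state is divided. The FNV-1a hash and the four-partition count are illustrative choices.

#include <stdio.h>
#include <stdint.h>

#define PARTITIONS 4

/* FNV-1a hash of the partitioning key (e.g. a mailbox or account name). */
static uint32_t hash_key(const char *key)
{
    uint32_t h = 2166136261u;
    for (; *key; key++) {
        h ^= (unsigned char)*key;
        h *= 16777619u;
    }
    return h;
}

/* Map a key to one of PARTITIONS servers; the client only sees the key. */
static int partition_for(const char *key)
{
    return (int)(hash_key(key) % PARTITIONS);
}

int main(void)
{
    const char *users[] = { "alice", "bob", "carol", "dave" };
    for (int i = 0; i < 4; i++)
        printf("%s -> partition %d\n", users[i], partition_for(users[i]));
    return 0;
}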

Agenda

Scale-up and Scale-out
Scale-Up
  CPU, Memory, Disks
  What does this mean for Windows applications
Scale-Out
  Clones
  Partitioning
Scale-Up and Scale-Out together
  Application example: Siebel Enterprise Application

Siebel 7 Environment

[Diagram: clients (Web Client, Wireless Client, Mobile Web Client, Handheld Client, Dedicated Web Client, Server Manager GUI and command-line interface) reach the Web Server running the Siebel Web Server Extension, plus a Wireless Gateway Server. Behind it sits the Siebel Enterprise Server: the Siebel Gateway Server (Connection Broker, Name Server) and multiple Siebel Servers, backed by the Siebel File System and the Siebel Database, with mobile databases (SQL CE) and EAI & data loading]

Questions?

© 2004 Microsoft Corporation. All rights reserved.
This presentation is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS OR IMPLIED, IN THIS SUMMARY.

Memory Latency and CPU Caches

CPUs are much faster than memory, and the gap continues to grow (100 MHz → 2+ GHz vs. 80 ns → 50 ns)
Caches are needed to hide memory latency
Cache effectiveness depends on the locality of memory references (e.g. cached data and code must be reused >9x before being pushed out)
"Cacheline" = 32, 64, … bytes (the unit of replacement and collision)

Effect of Cache Hit Ratio on Performance

Relative performance = 1 / ((FastTime × HitRatio) + (SlowTime × (1 − HitRatio)))
  Fast: 7 cycles for an L2 hit
  Slow: 150 cycles for a RAM access
The actual effect depends on memory accesses per instruction

[Chart: relative performance versus cache hit ratio, both axes from 0 to 1]
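A minimal sketch (not from the deck) that evaluates the formula above with the slide's latencies; normalizing to the 100%-hit case is an added assumption so the output matches a 0-to-1 relative-performance scale.

#include <stdio.h>

int main(void)
{
    const double fast = 7.0;    /* cycles for an L2 hit    */
    const double slow = 150.0;  /* cycles for a RAM access */

    for (int i = 0; i <= 10; i++) {
        double hit = i / 10.0;
        double avg = fast * hit + slow * (1.0 - hit);  /* average cycles per memory access */
        double rel = fast / avg;                        /* relative to a 100% hit ratio     */
        printf("hit ratio %.1f: %6.1f cycles/access, relative performance %.2f\n",
               hit, avg, rel);
    }
    return 0;
}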

Disks Are Becoming Tapes: Consequences

Use most disk capacity for archiving
  Copy on Write (COW) file system in Windows Server 2003
RAID10 saves arms, costs space (OK!)
Backup to disk
Pretend it is a 100 GB disk plus a 1 TB disk
  Keep the hot 10% of data on the fastest part of the disk
  Keep the cold 90% on the colder part of the disk
Organize computations to read/write disks sequentially in large blocks

12,000 User Benchmark on HP/Windows/SQL64

Concurrent users:
Workload                       Users     Avg operation response       Business transactions    Projected transactions,
                                         time to LoadRunner (sec)     throughput/hour          8-hour day
Sales / Service Call Center    8,400     0.137                        43,662                   349,300
eChannel (PRM)                 1,200     0.131                        16,130                   129,037
eSales                         1,200     0.144                        8,164                    65,313
eService                       1,200     0.162                        15,462                   123,694
Totals                         12,000    N/A                          83,418                   667,344

Server component throughput:
Workload                   Business transactions    Projected transactions,
                           throughput/hour          8-hour day
Assignment Manager         62,012                   496,096
EAI – HTTP Adapter         496,056                  3,968,448
EAI – MQ Series Adapter    294,539                  2,356,312
Workflow Manager           116,944                  935,552

SQL64 on a 4x 1.5 GHz Itanium 2 HP Integrity used 47% CPU and 13.3 GB of memory, proving unprecedented price/performance for Siebel.

12,000 User Benchmark on HP/Windows/SQL64 – Resource Utilization

Node                     Functional use                                        Avg CPU    Avg memory (MB)
4 x ProLiant DL760       Web server – application requests                     8%         600
3 x ProLiant BL20p       Web server – application requests                     7%         500
1 x ProLiant DL760       Web server – HTTP adapter, WF                         9%         400
1 x ProLiant 6400R       Siebel Gateway Server                                 3%         200
4 x ProLiant DL580       Siebel Application Server – end users                 13%        5,000
8 x ProLiant BL40p       Siebel Application Server – end users                 11%        4,700
1 x ProLiant DL580       Siebel Application Server – EAI HTTP Adapter + WF     25%        2,200
1 x ProLiant DL760       Siebel Application Server – EAI-MSMQ Adapter          21%        916
1 x ProLiant BL20p       Siebel Application Server – AM                        3%         80
1 x Integrity rx5670     Microsoft SQL Server 2000 (64-bit)                    47%        13,300

Siebel Scalability on Available Platforms

Concurrent users (AIX & DB2 / W2K & SQL2K / HP-UX):
Workload (user type)           Number of users              Avg operation response time     Business transactions
                                                            to LoadRunner (sec)             throughput/hour
Sales / Service Call Center    20,000 / 20,000 / 22,400     0.148 / 0.295 / 0.116           122,041 / 121,425 / 116,571
eChannel (PRM)                 4,000 / 4,000 / 3,200        0.182 / 0.185 / 0.212           27,615 / 27,619 / 42,890
eSales                         3,000 / 3,000 / 3,200        0.233 / 0.207 / 0.242           17,134 / 17,157 / 21,703
eService                       3,000 / 3,000 / 3,200        0.196 / 0.147 / 0.228           40,455 / 40,521 / 41,148
Totals                         30,000 / 30,000 / 32,000     –                               207,245 / 206,722 / 222,312

Background processing, business transactions throughput/hour (AIX & DB2 / W2K & SQL2K / HP-UX):
Assignment Manager         38,599 / 37,693 / 22,817
EAI – HTTP Adapter         746,676 / 854,557 / 770,905
EAI – MQ Series Adapter    545,472 / 728,745 / 540,845
Workflow Manager           96,299 / 97,585 / 60,244

Other metrics (AIX & DB2 / W2K & SQL2K / HP-UX):
DB growth (projected GB/month)    250.00 / 290.00 / N/A
DB memory used (GB)               18.10 / 25.40 / 31.10
Database connections              1,818 / 3,302 / 1,661
Web server kbps per user          6.50 / 0.54 / 4.50

Note: the 30,000-user tests are based on Siebel 7.0.3 and the 32,000-user test on Siebel 7.5.2; the transaction mix differs between the 7.0.3 and 7.5.2 test suites.

Resource Utilization by the 30,000 and 32,000 Concurrent User Tests

AIX & DB2 – 30,000 users:
Node                                                              Functional use                                    CPU    Memory
3 x IBM p660 6H1 /w 6x RS64-IV 668 MHz & 16 GB RAM                Web servers (user requests)                       84%    0.74 GB
1 x IBM p660 6H1 /w 6x RS64-IV 668 MHz & 16 GB RAM                Web servers (EAI HTTP requests)                   11%    0.59 GB
5 x IBM p680 /w 24x RS64-IV 600 MHz & 64 GB RAM                   Siebel Servers (Object Managers)                  70%    14.82 GB
2 x IBM p660 6H1 /w 6x RS64-IV 668 MHz & 16 GB RAM                Siebel Servers (AM / EAI / WF)                    85%    0.54 GB
1 x IBM p690 /w 32x RS64-IV 1.3 GHz & 128 GB RAM                  Database server – IBM DB2 v7.2                    23%    18.10 GB

W2K & SQL2K – 30,000 users:
Node                                                              Functional use                                    CPU    Memory
8 x Unisys ES2041 /w 4x PIII 700 MHz & 4 GB RAM                   Web servers (user requests)                       56%    0.184 GB
1 x Unisys ES2041 /w 4x PIII 700 MHz & 4 GB RAM                   Web servers (EAI HTTP requests)                   39%    0.035 GB
35 x Unisys ES2041 /w 4x PIII 700 MHz & 4 GB RAM                  Siebel Servers (Object Managers)                  48%    3.132 GB
2 x Unisys ES2041 /w 4x PIII 700 MHz (AM / WF)                    Siebel Servers (AM / EAI / WF)                    19%    1.805 GB
2 x Unisys ES2081 /w 8x PIII 700 MHz (EAI HTTP / MQ Series)       Siebel Servers (AM / EAI / WF)                    57%    0.810 GB
1 x Unisys ES7000 Orion 130 /w 16x Itanium 2 1 GHz & 64 GB RAM    Database server – MS SQL Server 2000 (64-bit)     67%    25.74 GB

HP-UX – 32,000 users:
Node                                                              Functional use                                    CPU    Memory
5 x HP rp5470 /w 4x 750 MHz & 16 GB RAM                           Web servers (user requests)                       46%    1.054 GB
5 x HP rp5470 /w 4x 875 MHz & 16 GB RAM                           Web servers (user requests)                       37%    0.816 GB
1 x HP rp2470 /w 2x 750 MHz & 8 GB RAM                            Web servers (EAI HTTP requests)                   70%    0.070 GB
4 x HP rp8400 /w 16x 875 MHz & 64 GB RAM                          Siebel Servers (Object Managers)                  82%    17.55 GB
1 x HP Superdome /w 32x 875 MHz & 128 GB RAM                      Siebel Servers (Object Managers)                  81%    34.70 GB
1 x HP rp8400 /w 16x 875 MHz & 64 GB RAM                          Siebel Servers (AM / EAI / WF)                    82%    2.90 GB
1 x HP Superdome /w 16x 875 MHz & 64 GB RAM                       Database server – Oracle 9.2.0.2                  62%    31.10 GB

x64 Performance and Benefits

Lab testing indicates increased performance
  Up to 50% improvement in users per server on comparable hardware (knowledge-worker simulation)
  Largest benefit will be with 4P servers in limited virtual kernel memory scenarios
Opportunity for server consolidation

[Chart: Terminal Server performance, knowledge-worker users per server (scale 0–600) on a 4P AMD64 HP DL585, comparing Windows 2000, Windows Server 2003 (32-bit), and Windows Server 2003 x64; the gains are labeled 50% and 80%]

Registry Setting to Reduce Microsoft® Outlook® 2003 Periodic Polling

HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\RPC
ConnManagerPoll [DWORD] = 0x600