Virtual Machine/Enterprise Systems Architecture
Version 2 Release 1.0 Performance Report
Copyright International Business Machines Corporation 1995. All rights reserved.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Programming Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Referenced Publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Summary of Key Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Changes That Affect Performance . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Performance Improvements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Performance Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Measurement Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Format Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Tools Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Migration from VM/ESA 1.2.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
CMS-Intensive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
9121-742 / Minidisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
9121-480 / Minidisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
9121-480 / SFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
9221-170 / Minidisk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
VSE/ESA Guest . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9121-320 / DYNAPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
9121-480 / DYNAPACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
9121-320 / VSECICS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
VMSES/E . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Migration from Other VM Releases . . . . . . . . . . . . . . . . . . . . . . . . . . 74
New Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
POSIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
DCE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
GCS TSLICE Option . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Additional Evaluations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
VM/ESA on the Server 500 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
RAMAC Array Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
CMS Virtual Machine Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
370 Accommodation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Storage Constrained VSE Guest using MDC . . . . . . . . . . . . . . . . . . . 135
Copyright IBM Corp. 1995 iii
RSCS 3.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
DirMaint 1.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
VTAM 4.2.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Appendix A. Workloads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
CMS-Intensive (FS8F) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
VSE Guest (PACE) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
VSE Guest (VSECICS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Appendix B. Configuration Details . . . . . . . . . . . . . . . . . . . . . . . . . 172
Appendix C. Master Table of Contents . . . . . . . . . . . . . . . . . . . . . . 174
Glossary of Performance Terms . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Notices
The information contained in this document has not been submitted to any formal IBM test and is distributed on an "as is" basis without any warranty either expressed or implied. The use of this information or the implementation of any of these techniques is a customer responsibility and depends on the customer's ability to evaluate and integrate them into the customer's operational environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do so at their own risk.

Performance data contained in this document were determined in various controlled laboratory environments and are for reference purposes only. Customers should not adapt these performance numbers to their own environments as system performance standards. The results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment.

References in this publication to IBM products, programs, or services do not imply that IBM intends to make these available in all countries in which IBM operates. Any reference to an IBM licensed program in this publication is not intended to state or imply that only IBM's program may be used. Any functionally equivalent product, program, or service that does not infringe any of the intellectual property rights of IBM may be used instead of the IBM product, program, or service. The evaluation and verification of operation in conjunction with other products, except those expressly designated by IBM, are the responsibility of the user.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you license to these patents. You can send inquiries, in writing, to the IBM Director of Licensing, IBM Corporation, 208 Harbor Drive, Stamford, CT 06904-2501 USA.
Programming Information
This publication is intended to help the customer understand the performance of VM/ESA 2.1.0 on various IBM processors. The information in this publication is not intended as the specification of any programming interfaces that are provided by VM/ESA 2.1.0. See the IBM Programming Announcement for VM/ESA 2.1.0 for more information about what publications are considered to be product documentation.
Trademarks
The following terms, denoted by an asterisk (*) in this publication, are trademarks of the IBM Corporation in the United States or other countries or both:

AIX
ACF/VTAM
CICS
CICS/VSE
ECKD
Enterprise System/9000
ES/9000
IBM
OfficeVision
OpenEdition
OS/2
PR/SM
Processor Resource/Systems Manager
RACF
RAMAC
RS/6000
SAA
Streamer
System/390
S/390
Virtual Machine/Enterprise Systems Architecture
VM/ESA
VM/XA
VSE/ESA
VTAM
3090

The following terms, denoted by a double asterisk (**) in this publication, are trademarks of other companies:
EXPLORE Legent Corporation
Acknowledgements
The following people contributed to this report:
Glendale Programming Laboratory
Bill Bitner
Wes Ernsberger
Greg Gasper
Bill Guzior
Larry Hartley
Gary Hine
Raine June
Greg Kudamik
Tom Wright
Washington Systems Center
Marty Horan
Chuck Morse
PC Server S/390 Competency Center in Atlanta
Gary Eheman
Abstract
The VM/ESA Version 2 Release 1.0 Performance Report summarizes the performance evaluation of VM/ESA 2.1.0. Measurements were obtained for the CMS-intensive, VSE guest, and VMSES/E environments on various ES/9000 processors.

This report provides performance and tuning information based on the results of the VM/ESA 2.1.0 performance evaluations conducted by the Glendale Programming Laboratory.

Discussion concentrates on the performance changes in VM/ESA 2.1.0, the performance effects of migrating from VM/ESA 1.2.2 to VM/ESA 2.1.0, and the performance of new functions provided in VM/ESA 2.1.0. A number of additional evaluations are also included.
Referenced Publications
The following publications are referred to in this report.
• VM/ESA: Performance, SC24-5782
• VM/ESA: CMS File Pool Planning, Administration, and Operation, SC24-5751
• VM/ESA: CP Diagnosis Reference, LY24-5256
• PC Server 500 System/390 Performance, WSC Flash 9522, G023542 or PCSVR390 PACKAGE on MKTTOOLS

• IBM PC Server 500 S/390 ...Is it right for you?, GK20-2763 or PCSVR390 PACKAGE on MKTTOOLS
• VSE/ESA 2.1.0 Performance Considerations, VE21PERF PACKAGE on IBMVSE
• VSE/ESA 2.1 Turbo Dispatcher Performance, VE21PERF PACKAGE on IBMVSE
• IBM RAMAC Array Family, GG24-2509
• Using the IBM RAMAC Array DASD in an MVS, VM, or VSE Environment, GC26-7013
• TCP/IP Version 2 Release 2 for VM: Planning and Customization, SC31-6082
The following publications are performance reports for earlier VM/ESA releases.
• VM/ESA Release 1.0 Performance Report, ZZ05-0469¹
• VM/ESA Release 1.1 Performance Report, GG66-3236
• VM/ESA Release 2 Performance Report, GG66-3245
• VM/ESA Release 2.1 Performance Report, GC24-5673-00
• VM/ESA Release 2.2 Performance Report, GC24-5673-01
1 This report is classified as IBM Internal Use Only. The information contained in this document may be discussed with customers, but the document may not be given to customers. They may ask their IBM representative for access to the information contained in this publication.
Summary of Key Findings
This report summarizes the performance evaluation of Virtual Machine/Enterprise Systems Architecture* (VM/ESA*) Version 2 Release 1.0. Measurements were obtained for the CMS-intensive, VSE guest, and VMSES/E environments on various Enterprise System/9000* (ES/9000*) processors. This section summarizes the key findings. For further information on any given topic, refer to the page indicated in parentheses.

Performance Changes: VM/ESA 2.1.0 includes a number of performance enhancements (page 9). Some changes have the potential to adversely affect performance, especially in storage-constrained CMS environments (page 16). Lastly, a number of changes were made that benefit VM/ESA performance management (page 17).
Migration from VM/ESA 1.2.2: Benchmark measurements show the following performance results for VM/ESA 2.1.0 relative to VM/ESA 1.2.2:

CMS-intensive  Performance has been improved significantly. Benchmark results show internal throughput rate (ITR) improvements of 3.8% to 7.0% and external response time improvements of 7% to 20% (page 25).

               The use of compiled REXX by CMS is the key factor that resulted in these improvements. Measurement results indicate that systems that already compile the S-disk REXX execs and XEDIT macros may experience a slight decrease in performance when migrating from VM/ESA 1.2.2 to VM/ESA 2.1.0 (page 31).

VSE guest      ITR and elapsed times are equivalent for the DYNAPACE I/O-intensive batch workload (page 47). These comparisons include results on the 9121-480 2-way processor using the VSE/ESA* 2.1.0 Turbo Dispatcher. ITR and response times are equivalent for the VSECICS transaction processing workload (page 64).

VMSES/E        The performance of the VMFBLD function has been improved. Elapsed time reductions ranging from 10% to 24% were observed (page 70).
Migration from Other VM Releases: The performance measurement data in this report can be used in conjunction with similar data in the four previous VM/ESA performance reports to get a general understanding of the performance aspects of migrating from earlier VM releases to VM/ESA 1.1.5 (370 Feature) or VM/ESA 2.1.0 (page 74).
New Functions
POSIX usage brings with it an increase in real storage requirements. For example, POSIX initialization causes about 640 non-shared pages to be referenced. A subset of these pages (130 pages, in one sample) continues to be referenced by subsequent POSIX usage. Processor and I/O requirements data are provided for a selection of frequently used POSIX functions and shell
commands. A loading test demonstrates that the byte file system server can handle large numbers of concurrent requests (page 83).

DCE response time and processor usage results are provided for 24 different RPC cases on 3 different configurations. Loading tests demonstrate that VM DCE servers can handle large numbers of concurrent RPC requests (page 96).

When the new GCS TSLICE option was used to decrease the GCS time slice value from 300 milliseconds (default) to 30 milliseconds for the VTAM* machine, there was little effect on system performance (page 111).
Additional Evaluations
The PC Server 500 System/390* can support many CMS users if sufficient real storage is provided. Example results are shown for a 128MB system using the FS8F0R workload, where 190 CMS users are supported with 1-second average response time (page 116).

There is some decrease in processor capacity when CMS users are run in XA mode or XC mode. Comparisons were obtained on VM/ESA 1.2.2 using the FS8F0R workload. Relative to running the CMS users in 370 mode virtual machines, ITR decreased 2.0% when the CMS users were run in XA mode and 3.2% when they were run in XC mode (page 125).

Benchmark runs with the CMS 370 Accommodation Facility (CMS370AC ON) show no measurable change in overall performance relative to CMS370AC OFF. This is for the case where there are no 370-only events that need to be simulated by CP (page 131).

The effective use of minidisk caching (MDC) for guest operating systems requires adequate real and/or expanded storage. The use of MDC on a storage-constrained system can result in reduced guest performance unless appropriate tuning actions are taken (page 135).

Measurements indicate that the performance of RSCS 3.2 is equivalent to RSCS 3.1. The performance of the new TCPNJE line driver is similar to the SNANJE line driver. The TCP/IP server, however, uses more processor time than the VTAM server (page 140).

For DirMaint 1.5, the DirMaint server has been rewritten in REXX and many new DirMaint functions have been added. As a result, DirMaint processor usage has increased substantially from DirMaint 1.4. Total system impact for the measured CMS-intensive environment was -4.1% ITR, and -3.2% ITR with the use of compiled REXX for the DirMaint execs (page 145).

Measurements using the FS8F0R CMS-intensive workload show a 0.6% ITR improvement when migrating from VTAM 3.4.1 to VTAM 4.2.0 (page 153).
Changes That Affect Performance
This chapter contains descriptions of various changes to VM/ESA 2.1.0 that affect performance. This information is equivalent to the information on VM/ESA 2.1.0 performance changes found in Appendix E of VM/ESA Performance, with additional detail plus information that has become available since its publication.

Most of the changes are performance improvements and are listed under “Performance Improvements” on page 9. However, some have the potential to adversely affect performance. These are listed under “Performance Considerations” on page 16. The objectives of these two sections are as follows:
• Provide a comprehensive list of the significant performance changes.
• Allow installations to assess how their workloads may be affected by these changes.

• Describe new functions that applications could exploit to improve performance.

Throughout the rest of the report, various references are made to these changes when discussing the measurement results. These results serve to further illustrate where these changes apply and how they may affect performance.

“Performance Management” on page 17 is the third section of this chapter. It discusses changes that affect VM/ESA performance management.
The following items improve the performance of VM/ESA.
• CP
− CP Module Linkage Changes
− Improved VMCF Interrupt Processing
− Virtual Channel to Channel Locking
− CP Trace Table Default
− MDC Fair Share Limit
− Extended Record Cache Support
− Improved LOCATEVM Command
• CMS
− Compiled REXX for CMS
− CMS Nucleus Restructure
− CMS Pipeline Stages in Assembler
− Data Compression Support
− SUPERSET XEDIT Subcommand
• Other
− GCS PAGEX Support
− GCS SET TSLICE Command
− VMSES/E Improvements
CP Module Linkage Changes
In order to reduce overhead in the linkage of CP modules, changes were made in three areas for some frequent linkage cases. The first was to change some dynamic linkages to use fast dynamic linkage. Fast dynamic linkage was first introduced in VM/ESA 1.1.1 and is a more efficient method to do CP dynamic linkage.

The second area was to move some highly used modules from the pageable list to the resident list to save overhead. This increases the size of the resident nucleus slightly.

The last area was to avoid writing trace entries into the trace table for calls from HCPALLVM. This module is passed the address of a routine and then calls the passed routine for every VMDBK on the system. This is done for such functions as CP monitor high frequency user state sampling. By avoiding the trace entries, much overhead is saved and the trace table is not cluttered with less useful trace entries.
Improved VMCF Interrupt Processing
The amount of processing time required to handle VMCF interrupts has been reduced. The amount of improvement is directly related to the number of pending VMCF interrupts. If there are only a few pending interrupts, the effect is negligible.

When virtual multiple processor support for VMCF was implemented in VM/ESA 1.2.1, a large system effect in processing pending VMCF interrupts was introduced. This large system effect included storage being obtained and not released. VM APAR VM58414 was written to address the storage problem. This APAR also results in a slight processor usage reduction. In VM/ESA 2.1.0, the VMCF interrupt processing was redesigned to eliminate the large system effect.

Figure 1 shows measurements of processor time used to process different numbers of pending VMCF interrupts in four environments. This figure shows the large system effects that were introduced in VM/ESA 1.2.1 and somewhat improved with APAR VM58414. The figure also shows that with the changes in VM/ESA 2.1.0, the large system effect is gone and processor usage is very similar to the VM/ESA 1.2.0 environment. Therefore, this improvement only applies to customers who are migrating from VM/ESA 1.2.1 or VM/ESA 1.2.2.

Figure 1. VMCF Interrupt Processing. Processor usage for various numbers of pending VMCF requests on a 9021-900
Virtual Channel to Channel Locking
The virtual channel-to-channel (VCTC) locking scheme has been improved. The new scheme uses a separate lock word for each VCTC instead of a global lock. This may improve VCTC throughput, especially in cases where a virtual machine is doing a high rate of requests through several VCTCs.

The global lock is still required in certain cases, such as processing the COUPLE command.
CP Trace Table Default
The default calculation for the size of the CP trace table has been changed. This will ordinarily result in a smaller trace table for systems that take the default. As before, the trace table size can also be set explicitly.

In the past, the trace table size defaulted to 1/64th of available storage for each processor. With VM/ESA 2.1.0, that calculation is done for the master processor. However, if the result is greater than 100 pages, the master trace table size is set to 100. The trace table size for each alternate processor is then set to 75% of the size of the master processor trace table. Example savings for a 512MB 6-way system are over 46MB.

Most people feel that the new calculation represents a much better tradeoff between serviceability and real storage usage, especially considering that VM/ESA reliability has improved dramatically over the last several releases.
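The 512MB 6-way example can be checked with a little arithmetic. The sketch below is an illustrative reconstruction of the sizing rules as described above, not the actual CP algorithm; the function name is mine, and 4KB pages are assumed:

```python
PAGE_KB = 4  # trace table pages are 4KB

def trace_table_kb(storage_mb, n_cpus, new_scheme=True):
    """Estimate the total CP trace table size in KB under the old
    (pre-2.1.0) and new (2.1.0) default calculations."""
    per_cpu_kb = storage_mb * 1024 // 64        # old rule: 1/64 of storage per CPU
    if not new_scheme:
        return per_cpu_kb * n_cpus
    master_kb = min(per_cpu_kb, 100 * PAGE_KB)  # master capped at 100 pages
    alt_kb = int(master_kb * 0.75)              # each alternate gets 75% of master
    return master_kb + alt_kb * (n_cpus - 1)

old = trace_table_kb(512, 6, new_scheme=False)  # 6 x 8MB = 49152 KB
new = trace_table_kb(512, 6)                    # 400 + 5 x 300 = 1900 KB
print(round((old - new) / 1024, 1))             # ~46.1 MB saved, matching "over 46MB"
```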
MDC Fair Share Limit
This improvement applies only to migrations from VM/ESA 1.2.2. It is also available on VM/ESA 1.2.2 as APAR VM59590.

Minidisk caching (MDC) has a fair share algorithm to prevent any one user from flooding the cache with data. This algorithm can be disabled with the NOMDCFS directory option. The fair share insert limit is dynamic but has a floor (minimum value). Analysis of benchmarks and customer data on VM/ESA 1.2.2 systems showed that the floor was too low and that this was degrading system performance. Accordingly, the old floor of 8 inserts per minute was increased to 150 inserts per minute.

The I/Os excluded due to the fair share limit being exceeded, and the current fair share limit, are reported by RTM (MDCACHE screen) and VMPRF (MINIDISK_CACHE_BY_TIME report). If, on your current system, this information shows that a significant number of I/Os are being excluded from minidisk caching due to the fair share limit being exceeded, the system may benefit from this change.
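The effect of the higher floor can be illustrated with a deliberately simplified model. This assumes the dynamic limit has dropped all the way to its floor; the real algorithm adjusts the limit dynamically:

```python
OLD_FLOOR = 8    # cache inserts per minute, VM/ESA 1.2.2 floor
NEW_FLOOR = 150  # cache inserts per minute, VM/ESA 2.1.0 floor

def excluded_per_min(insert_rate, floor):
    """Inserts per minute beyond the fair share floor, and therefore
    excluded from minidisk caching (simplified model)."""
    return max(0, insert_rate - floor)

# A user generating 120 cache-eligible inserts per minute:
print(excluded_per_min(120, OLD_FLOOR))  # 112 inserts/min excluded under the old floor
print(excluded_per_min(120, NEW_FLOOR))  # 0 excluded under the new floor
```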
Extended Record Cache Support
Prior to VM/ESA 2.1.0, VM/ESA supported guest use of the Record Cache I function in the 3990-6 storage control. This was for guest operating systems that build their own channel programs. Record Cache I support is now extended to users of DIAGNOSE X'A4', DIAGNOSE X'250', and the block-I/O facility under certain conditions. When VM knows that data being written meets the control unit's criteria for "regular data format", VM sets a channel-program indicator to achieve improved performance for I/O requests issued through these I/O-service facilities. I/O requests must meet all of the following conditions:

• The request is to write data.

• The request is eligible to be stored in VM's minidisk cache.

• The request is for a track that was previously read and found to be in standard format.²

The record-cache function requires the DASD Fast Write (DFW) function to be enabled and adds to the performance benefits of this function. With DFW, the control unit completes the I/O request almost immediately. The host need not wait for the data to be written (destaged) to the DASD volume, since the data are protected from loss by residing in the control unit's nonvolatile storage (NVS) while waiting to be destaged. However, if the record is not already in the control unit's cache when an update is received from the host, the entire track must be staged (read) into the cache before the I/O request can complete. The additional performance advantage of the record-cache function is that staging of the track containing the record can be avoided. Because VM tells the control unit that the record being written is in a standard format, the control unit knows that the record will fit within the existing format of the track when the record is ultimately destaged from the NVS.
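The three eligibility conditions above combine as a simple conjunction. The predicate below is purely illustrative; the names are hypothetical and do not correspond to actual CP symbols or interfaces:

```python
def sets_record_cache_indicator(is_write, mdc_eligible, track_standard_format):
    """VM sets the channel-program indicator for Record Cache I only
    when all three conditions listed in the text hold."""
    return is_write and mdc_eligible and track_standard_format

# A write to an MDC-eligible minidisk on a known standard-format track qualifies;
# a read, or a write to a track not known to be standard format, does not.
print(sets_record_cache_indicator(True, True, True))   # True
print(sets_record_cache_indicator(False, True, True))  # False
print(sets_record_cache_indicator(True, True, False))  # False
```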
Improved LOCATEVM Command
The LOCATEVM CP command (class G) can use a very large amount of processor time when a large search range is given. Because of this, use of this command can adversely impact system performance. As a precaution, some installations have chosen to reassign this command to a more restrictive class.

In VM/ESA 2.1.0, LOCATEVM processor requirements have been substantially reduced (by 75% in one test). While LOCATEVM can still use a significant amount of processor and paging resources, it is now less risky to leave it available to class G users.
Compiled REXX for CMS
Most of the CMS REXX execs and XEDIT macros on the S-disk are now shipped as compiled REXX files. This includes all files (except SYSPROF EXEC) that are in the CMSINST shared segment and a number of others. They make use of a subset REXX run-time library that is shipped with VM/ESA 2.1.0.

Note: Some of the CMS execs (most notably FILELIST, RDRLIST, and PEEK) are written in EXEC2. They remain in EXEC2 and are not affected by this change.

This change can significantly improve the performance of CMS-intensive workloads that use REXX-implemented CMS functions such as DIRLIST, DISCARD, NAMES, NOTE, RECEIVE, SENDFILE, TELL, and VMLINK, as well as XEDIT macros such as ALL and SPLTJOIN. Processor capacity improvements exceeding 6% have been observed.

The uncompiled source files are provided on the S-disk for customers who wish to make modifications. Customers with the REXX compiler are advised to recompile the updated files before placing them back onto CMSINST so as to retain the performance advantages.
2 Minidisk caching considers a track to be in standard format when it meets certain criteria. For example, all DASD records on the track must have the same length, and an integral number of these records must fit into a 4KB page. There are other criteria as well; for a complete definition of standard format, see the minidisk caching chapter in VM/ESA CP Diagnosis Reference.
CMS Nucleus Restructure
The CMS nucleus was restructured in VM/ESA 2.1.0. This improved performance in a number of ways:

• More of the CMS nucleus has been moved above the 16MB line. This can improve performance by allowing more use of SAVEFD and by allowing more shared segments to be created that require space below the 16MB line.

− The CMS shared system now starts at X'F00000' instead of X'E00000'.
− NLS language repository segments can now reside above the 16MB line.
• In prior releases, the default installation procedure placed the VMLIB, VMMTLIB, and PIPES segments below the 16MB line (even though they can be run above the line) because of the possibility that CMS could be run in a 370 mode machine. With VM/ESA 2.1.0, default installation puts these segments above the 16MB line. (VMMTLIB has been integrated into the portion of the CMS saved system that is above the line.) This change frees storage that was previously taken below the 16MB line.

• CMS now allocates some of its control blocks (such as the IUCV path table) above the 16MB line when such space is available.
• The 370-mode code has been removed from the mainline paths in CMS.
• The fast path through the SVC interrupt handler (DMSITS) has been further optimized.
• The following modules have been moved from the S-disk back into the CMS nucleus:

DMSQRC - query COMDIR
DMSQRE - query ENROLL
DMSQRF - query CMS (window manager)
DMSQRG - query CMS (window manager)
DMSQRH - query CMS (window manager)
DMSQRN - query NAMEDEF
DMSQRP - query FILEPOOL
DMSQRQ - query LIMITS, FILEWAIT, RECALL
DMSQRT - query AUTOREAD, CMSTYPE, and so forth
DMSQRU - query FILEDEF, LABELDEF
DMSQRV - query INPUT, OUTPUT, SYNONYM
DMSQRW - query libraries (MACLIB, and so forth)
DMSQRX - query DOS, DOSPART, UPSI, DLBL
DMSSEC - set COMDIR
DMSSEF - set CMS (window manager)
DMSSML - set/query MACLSUBS

This can benefit the performance of workloads that use these functions if they had not previously been used from a shared segment.

Note: In VM/ESA 1.2.2, these modules resided in the CMSQRYL and CMSQRYH logical segments. These segments no longer exist.
CMS Pipeline Stages in Assembler
CMS Pipelines now provides assembler macros that perform basic pipeline functions and are the building blocks for writing assembler stage commands. User-written assembler stage commands provide increased performance over similar stage commands written in REXX.
Data Compression Support
VM/ESA 2.1.0 includes data compression API support so vendors and customers can more easily create applications that exploit the use of compression services. Both a macro interface (CSRCMPSC) and a CSL interface (DMSCPR) are provided. Use of this support can save DASD space, tape storage space, and transmission line costs. The increase in processing time associated with data compression and expansion is greatly reduced on processors that have hardware compression (the CMPSC instruction).

In addition, CMS and GCS support the VSE/VSAM Version 6 Release 1.0 interface for data compression. Using the COMPRESS parameter of the DEFINE function causes VSAM to automatically expand or compress data during a VSAM read or write operation, respectively. When available on the processor, the CMPSC instruction is used for this purpose. CMS and GCS system users can read and write to VSAM files that have been compressed under the control of the VSE/VSAM program. No application program changes are necessary.
SUPERSET XEDIT Subcommand
This new XEDIT subcommand performs the same function as the existing SET subcommand. However, it can be used to set multiple options in one invocation. The following CMS Productivity Aids were changed to use this subcommand: FILELIST, RDRLIST, NOTE, SENDFILE, PEEK, DIRLIST, and the EXECUTE XEDIT macro. It can also be used to improve the performance of user-written applications that include performance-sensitive XEDIT macros.
GCS PAGEX Support
You can now make use of the CP PAGEX facility with GCS. PAGEX is specified on a virtual machine basis. When PAGEX is ON and a given GCS task takes a page fault, GCS will dispatch other active GCS tasks in the virtual machine while waiting for that page fault to be resolved. This can result in increased capacity for that virtual machine to do work.

PAGEX is especially useful in cases where a virtual machine has a large number of GCS tasks and these tasks are active on an intermittent basis. A good example would be an RSCS machine with many line drivers.

If this is not the case, SET RESERVE remains the best method to minimize the effects of paging. SET RESERVE works best when the virtual machine's reference pattern has good locality of reference and its working set size does not change much over time. In intermediate cases, the best tuning solution might be to use a combination of PAGEX ON and SET RESERVE. SET RESERVE would be used to protect the most frequently used pages, while PAGEX ON would be used to keep those page faults that do occur from serializing the whole virtual machine.
Note: PAGEX is not recommended for the VTAM machine because most VTAM execution is on one GCS task.
GCS SET TSLICE Command
In prior releases, the GCS time slice was fixed at 300 milliseconds. With VM/ESA 2.1.0, 300 milliseconds is retained as the default setting, but this can be altered for any given virtual machine in a GCS group by using the new SET TSLICE GCS command.

A smaller time slice setting can be used to help avoid time-out situations when multiple tasks are involved. You can estimate whether the default time slice setting is likely to result in a time-out situation. For example, if the QUERY TSLICE command shows 100 active tasks, the maximum delay before a given task is run is 100 times 0.300, or 30 seconds. If this is more than the line time-out limit, you should set the time slice lower.

Note: Setting the time slice lower than it needs to be will tend to increase GCS dispatching overhead.
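The estimate above is straightforward to compute. This is a sketch of the arithmetic only (the function name is mine, not a GCS interface), assuming every active task consumes a full time slice before the task of interest runs:

```python
def max_dispatch_delay_s(active_tasks, tslice_ms=300):
    """Worst-case wait in seconds before a given GCS task is dispatched,
    per the estimation method described in the text."""
    return active_tasks * tslice_ms / 1000.0

print(max_dispatch_delay_s(100))      # 30.0 seconds at the 300 ms default
print(max_dispatch_delay_s(100, 30))  # 3.0 seconds with TSLICE set to 30 ms
```

If the computed delay exceeds the line time-out limit, a smaller TSLICE value is indicated, subject to the dispatching-overhead caution above.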
VMSES/E ImprovementsThe performance of the VMFBLD function has been improved. Elapsed time andprocessor time reductions exceeding 20% have been observed. Thisimprovement was first introduced through VM/ESA 1.2.2 APAR VM57938.
The performance of VMFCOPY has been improved by providing an SPRODIDoption. In prior releases, all files that met the fn ft fm criteria were copiedregardless of what product they belonged to. When you specify the SPRODIDoption, only those files that belong to the specified product are copied.
The automation of more service processing in VMSES/E 2.1.0 eliminates certain manual tasks. Therefore, the overall time required to do these tasks will decrease. See “VMSES/E” on page 70 for a list of tasks that have been automated by VMSES/E 2.1.0.
Performance Considerations

These items warrant consideration because they have the potential to negatively affect performance.
• Additional CMS Paging Space Requirements
• CMS Working Set Size Increase
• Potential Overlap of CMS with Shared Segments
Additional CMS Paging Space Requirements

The number of pages that are referenced during IPL CMS but are (typically) unused thereafter has increased by about 12. This increases DASD paging space requirements to some extent. Since these referenced pages must ultimately be paged out, they can also reduce performance in situations where large numbers of CMS users are logging on over a short period of time.
Many additional virtual pages in the user's virtual machine are referenced when the POSIX environment is initialized. This occurs implicitly when the first POSIX request is made. Nearly all of these additional pages are no longer referenced if there are no subsequent POSIX requests. However, these pages will add to the number of occupied page slots on DASD. This leads to the following two recommendations:
1. If many users are (even occasionally) using the POSIX environment, check whether the system's page space is still sufficient.
2. Do not put POSIX-oriented commands such as OPENVM MOUNT in your PROFILE EXEC unless you will normally be using POSIX functions subsequent to starting CMS.
CMS Working Set Size Increase

CMS references more non-shared pages than it did in VM/ESA 1.2.2. This will tend to increase paging, especially in storage-constrained environments with large numbers of CMS users. For the CMS-intensive FS8F measurements reported in “CMS-Intensive” on page 25, working set increases ranging from 1% to 10% were observed.
The main reason for this increase is that CMS Pipelines now does a CMS multitasking call as part of its initialization. This means that users who do not use pipelines, or who are already running CMS multitasking, will not experience the pipelines-related working set increase.
Potential Overlap of CMS with Shared Segments

In VM/ESA 1.2.2, the CMS saved system occupied megabytes E, F, and 10. In VM/ESA 2.1.0, it occupies megabytes F, 10, 11, and 12. If your installation has defined any shared segments in megabytes 11 or 12, they will need to be moved in order to avoid overlapping CMS.
Performance Management

These changes affect the performance management of VM/ESA.

• Monitor Enhancements
• Dynamic Allocation of Subchannel Measurement Blocks
• SET THROTTLE Command
• QUERY FILEPOOL Command Extensions for BFS
• Accounting Data
• VM Performance Products
Monitor Enhancements

A number of new monitor records and fields have been added. Some of the more significant changes are summarized below. For a complete list of changes, see the MONITOR LIST1403 file (on MAINT's 194 disk) for VM/ESA 2.1.0.
• User State Sampling
A number of changes were made to improve the usefulness of the user state sampling data.
− Users doing diagnose I/O used to show up as being in simulation wait. They now appear in I/O wait.
− Users in CP SLEEP or CP READ used to be shown as being in console function mode wait. They now appear as idle.
− A new state, active page wait, has been added for virtual machines that have a page request outstanding but can handle it with PAGEX or asynchronous page fault handling.
• CP Configurability II
The CP Configurability II support allows I/O devices to be added to or removed from the I/O hardware configuration while VM/ESA is running. In order to track these changes, several new event I/O domain records were added, such as the delete device record (domain 6 record 15, D6/R15).
A measurement block, sometimes referred to as a subchannel measurement block, is a control block that is associated with a given device. It contains measurement data for the device such as the I/O rate and timing values for the various components of service time. The hardware is responsible for updating this information. From the measurement block information, performance products can compute the device's service time, I/O rate, and utilization. With the CP Configurability II support, it is now possible for a given device to not have an associated measurement block. Accordingly, information has been added to the monitor to indicate when this is the case.
The new SET SCMEASURE command allows an administrator to enable or disable the collection of subchannel measurement data for a specific device or range of devices. An event record is created each time the SET SCMEASURE command is used.
• SET THROTTLE
Monitor fields have been added in support of the new SET THROTTLE command. These include:
− whether a device has been throttled
− the throttle rate for a device
− the number of times I/O was delayed on a given device
− the number of times a given user had I/O delayed due to throttle
• RAMAC* Support
Monitor support for RAMAC is available for VM/ESA 1.2.1 and VM/ESA 1.2.2 through development APAR VM59200. This support is integrated into VM/ESA 2.1.0. Since RAMAC DASD appear to VM as either 3380s or 3390s, additional fields have been added to the device configuration data record (D1/R6) and the vary on device record (D6/R1) to indicate the actual DASD and control unit type where possible. Cache activity data records (D6/R4) have been made available for the RAMAC subsystem.
• SFS APPLDATA
The APPLDATA domain monitor data contributed by SFS filepool servers has been extended to include counts and timings that pertain to the byte file system. These include byte file request counts for each type of request, lock conflict counts for each type of byte file lock conflict, and token callback information.
• CMS Multitasking APPLDATA
CMS multitasking can contribute application data to the monitor in the APPLDATA (10) monitor domain. This includes the following information:
− Thread creation and deletion counts and timings
− Thread switch rates
− Number of blocked threads
− Highest number of threads and POSIX processes in use
Dynamic Allocation of Subchannel Measurement Blocks

The I/O service times and related information for a device are computed from data found in its associated subchannel measurement block, which the hardware is responsible for updating. With the new functions provided by CP Configurability II, there can now be scenarios where there is no subchannel measurement block associated with a device. In such cases, the service times and related data are not available and are shown as zeros in the monitor data.
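The derivation described above can be sketched as follows. The field names and the 60-second interval are illustrative assumptions, not the actual hardware layout of a subchannel measurement block; the point is only that I/O rate, service time, and utilization all fall out of accumulated counts and times.

```python
def device_metrics(ssch_count: int, connect_ms: float, disconnect_ms: float,
                   pending_ms: float, interval_s: float) -> dict:
    """Derive I/O rate, average service time, and utilization from
    counters accumulated over a measurement interval (illustrative
    field names; a device without a measurement block reports zeros)."""
    if ssch_count == 0 or interval_s <= 0:
        return {"io_rate": 0.0, "service_ms": 0.0, "util_pct": 0.0}
    # Average service time is total component time divided by I/O count.
    service_ms = (connect_ms + disconnect_ms + pending_ms) / ssch_count
    io_rate = ssch_count / interval_s
    # Utilization: fraction of the interval the device was busy.
    util_pct = io_rate * service_ms / 1000.0 * 100.0
    return {"io_rate": io_rate, "service_ms": service_ms, "util_pct": util_pct}

# 600 I/Os over 60 s with 9000 ms of total component time:
# 10 I/Os per second, 15 ms service time, 15% busy.
print(device_metrics(600, 6000.0, 2400.0, 600.0, 60.0))
```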
SET THROTTLE Command

This is a new CP command that can be used to set a maximum rate at which the system's virtual machines, in aggregate, are permitted to initiate I/Os to a given device. This limit does not apply to I/Os initiated by CP. CP converts the specified rate into an interval representing the minimum time that must pass after one I/O is started before the next I/O to that device can start. If CP receives an I/O request to a device that has been limited by SET THROTTLE, that
I/O request is delayed, if necessary, until the minimum time interval has completed.
In multi-system configurations that have shared channels, control units, or devices, SET THROTTLE can be used to help prevent any one system from overutilizing the shared resources.
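The rate-to-interval conversion described above can be sketched as a simple model. This is not CP's internal implementation, only an illustration of the arithmetic: a throttle rate becomes a minimum spacing between I/O starts, and a request arriving too soon is delayed to the earliest permitted start time.

```python
def min_interval_ms(throttle_rate_per_sec: float) -> float:
    """Convert a SET THROTTLE rate into the minimum time, in
    milliseconds, between successive I/O starts to the device."""
    return 1000.0 / throttle_rate_per_sec

def delay_needed_ms(now_ms: float, last_start_ms: float, rate: float) -> float:
    """How long a new virtual I/O must wait, if at all, before it
    may start (CP-initiated I/O is exempt from throttling)."""
    earliest = last_start_ms + min_interval_ms(rate)
    return max(0.0, earliest - now_ms)

# A rate of 50 I/Os per second means one start every 20 ms.
print(min_interval_ms(50))                # 20.0
# Last I/O started at t=100 ms; a request at t=105 ms waits until t=120 ms.
print(delay_needed_ms(105.0, 100.0, 50))  # 15.0
```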
QUERY FILEPOOL Command Extensions for BFS

Information has been added to the QUERY FILEPOOL commands to provide byte file system performance information. In particular, byte file system counts and timings have been added to QUERY FILEPOOL REPORT and its subset, QUERY FILEPOOL COUNTER.
Accounting Data

The following list describes fields in the virtual machine resource usage accounting record (type 01) that may be affected by performance changes in VM/ESA 2.1.0. The columns where each field is located are shown in parentheses.
Milliseconds of processor time used (33-36)
This is the total processor time charged to a user and includes both CP and emulation time. For most workloads, this should not change much as a result of the changes made in VM/ESA 2.1.0. Exception: CMS-intensive workloads that make significant use of DIRLIST, DISCARD, NAMES, NOTE, RECEIVE, SENDFILE, TELL, and VMLINK, and/or XEDIT macros such as ALL and SPLTJOIN. Such workloads can experience a significant reduction in total processor time arising from CMS's use of compiled REXX. Most of this decrease will be virtual processor time.
Milliseconds of virtual processor time (37-40)
This is the virtual time charged to a user. See the above discussion of total processor time.
Requested virtual nonspooled I/O starts (49-52)
This is a total count of requested starts. All requests may not complete. The value of this field could change, depending on the system I/O characteristics, because of two changes made to CP:
• In previous releases, this counter was incremented for each real I/O done. This included the scenario where CP splits a virtual I/O into a separate real I/O for each cylinder involved. In VM/ESA 2.1.0, this counter will be incremented only once per virtual I/O.
• In the past, virtual I/Os eligible for minidisk caching that experienced a cache miss were not always being counted. This has been corrected.
Completed virtual nonspooled I/O starts (73-76)
This is a total count of completed requests. The previous discussion of “requested virtual nonspooled I/O starts” also applies to this field.
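Tools that post-process these records can pull the counters out by the 1-based column ranges given above. The sketch below treats the record as an 80-byte card image; the assumption that each 4-column field holds a big-endian binary counter is mine for illustration and should be checked against the accounting record layout for your release.

```python
def field(record: bytes, first_col: int, last_col: int) -> int:
    """Extract a counter from a type 01 accounting record by 1-based
    column range, assuming a big-endian binary field (an assumption;
    verify against the documented record layout)."""
    return int.from_bytes(record[first_col - 1:last_col], "big")

# Build a dummy 80-byte record with 1234 ms of total CPU in columns 33-36.
rec = bytearray(80)
rec[32:36] = (1234).to_bytes(4, "big")
print(field(bytes(rec), 33, 36))  # 1234
```

The same helper would read virtual processor time from columns 37-40 and the requested and completed I/O start counts from columns 49-52 and 73-76.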
VM Performance Products

VM Performance Reporting Facility 1.2.1 (VMPRF) requires APAR VM59656 (PTF UM27312) to run on VM/ESA 2.1.0. VMPRF at this service level includes the following functional enhancements:
• 3990-6 cache controller support
• RAMAC support
• LE/370 support
• page active and limit list data have been added to the state sample reports
Realtime Monitor VM/ESA 1.5.2 (RTM/ESA) requires APAR GC05374 (PTF UG03792) to run on VM/ESA 2.1.0. It can be run on CMS11 (or earlier) in 370 mode or on CMS12 in XA mode with CMS370AC on. If it was built on CMS12 and is run on CMS12, it will set CMS370AC on. RTM/ESA at this service level can be built using HLASM Release 2. Support for RAMAC DASD has been added.
Performance Analysis Facility/VM 1.1.3 (VMPAF) will run on VM/ESA 2.1.0 with the same support as VM/ESA 1.2.2.
Measurement Information
This chapter discusses the types of processors used for measurements in the report, the level of software used, the configuration details associated with each measurement, and the licensed programs and tools that were used in running and evaluating the performance measurements.
Hardware

The following processors were measured.
• 9121-742
• 9121-480
This processor was used for the 9121-480 and 9121-320 measurements. To run as a 9121-320, one processor was varied offline.
• 9221-170
• PC Server 500 System/390
Software

Unless otherwise noted, a pre-general-availability level of VM/ESA 2.1.0 was used for the measurements in this report. Not all of the VM/ESA 2.1.0 measurements in this report were made with the same level of code. As the product developed, newer code levels were made that supplanted the level that had been in use. In any evaluation section that compares VM/ESA 2.1.0 to itself, the same level of code was maintained. Keep this in mind when trying to compare results that are taken from different sections.
Other releases of VM were measured for this report. VM/ESA 1.2.2 was at the GA+first-RSU level (General Availability, Recommended Service Upgrade tape). The service that was part of VM/ESA 1.2.2 after the first RSU level and integrated into VM/ESA 2.1.0 can account for some of the difference between VM/ESA 1.2.2 and VM/ESA 2.1.0.
See the appropriate workload section in Appendix A, “Workloads” on page 157 for the other licensed programs' software levels.
Format Description

This part of the report contains a general explanation of the configuration details that are associated with each measurement.
For each group of measurements there are five sections:
1. Workload: This specifies the name of the workload associated with the measurement. For more detail on the workload, see Appendix A, “Workloads” on page 157.
2. Hardware Configuration: This summarizes the hardware configuration and contains the following descriptions:
• Processor model: The model of the processor.
• Processors used: The number of processors used.
• Storage: The amount of real and expanded storage used on the processor.
− Real: The amount of real storage used on the processor.
Any real storage not defined for the specific measurement was configured as expanded storage and attached to an idle user.
− Expanded: The amount of expanded storage used on the processor.
• Tape: The type of tape drive and the tape's purpose.
• DASD: The DASD configuration used during the measurement.
The table indicates the type of DASD used during the measurement, the type of control units that connect these volumes to the system, the number of paths between the processor and the DASD, and the distribution of the DASD volumes for PAGE, SPOOL, TDSK, USER, SERVER, and SYSTEM. An ″R″ or ″W″ next to the DASD counts means read or write caching enabled, respectively.
• Communications: The type of control unit, number of communication control units, number of lines per control unit, and the line speed.
3. Software Configuration: This section contains pertinent software information.
• Driver: The tool used to simulate users.
• Think time distribution: The type of distribution used for the user think times.
Bactrian: This type of think time distribution represents a combination of both active and inactive user think times. The distribution includes long think times that occur when the user is not actively issuing commands. Actual user data were collected and used as input to the creation of the Bactrian distribution. This type of mechanism allows the transaction rate to vary depending on the command response times in the measurement.
• CMS block size: The block size of the CMS minidisks.
• Virtual Machines: The virtual machines used in the measurement.
For each virtual machine, the table indicates the following: name, number used, type, size and mode, share of the system resources scheduled, number of pages reserved, and any other options that were set.
4. Measurement Discussion: This contains an analysis of the performance data in the table and gives the overall performance findings.
5. Measurement Data: This contains the table of performance results. These data were obtained or derived from the tools listed in “Tools Description” on page 24.
There are several cases where the same information is reported from two sources because the sources calculate the value in a slightly different manner. For example, consider the external throughput rate measures, ETR (T) and ETR, that are based on the command rate calculated by TPNS and
RTM, respectively. TPNS can directly count the command rate as it runs the commands in the scripts. RTM, on the other hand, reports the command (transaction) rate that is determined by the CP scheduler, which has to make assumptions about when transactions begin and end. This can make the counts reported by RTM vary in meaning from run to run and vary from the values reported by TPNS. As a result, the analysis of the data is principally based on the TPNS command rate. Furthermore, some values in the table (like TOT INT ADJ) are normalized to the TPNS command rate in an effort to get the most accurate performance measures possible.
There are instances in these tables where two variables appear equal, yet a non-zero number is shown for their difference or percent difference. This indicates that the variables are equal only when rounded to the significant digits that appear in the table.
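For example (with illustrative numbers, not values taken from the tables), two measurements that both display as 0.025 at three decimal places can still show a non-zero difference:

```python
a, b = 0.0246, 0.0254

# Both round to the same displayed value...
print(f"{a:.3f} {b:.3f}")   # 0.025 0.025
# ...yet the difference computed from the unrounded values is non-zero.
print(f"{b - a:.4f}")       # 0.0008
```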
Performance terms listed in the tables and discussed in this part of the document are defined in the glossary.
Tools Description

A variety of licensed programs and internal tools were used to evaluate the performance measurements. The programs used in the measurements are listed below.

CICSPARS           CICS* Performance Analysis Reporting System; provides CICS response time and transaction information.

EXPLORE**          Monitors and reports performance data for VSE systems.

FSTTAPE            Reduces hardware monitor data for the 9121 processors.

Hardware Monitor   Collects branch, event, and timing data.

MONFAST            Collects and reports branch, event, and timing data on a 9221 processor.

REDFP              Consolidates the QUERY FILEPOOL STATUS data.

RTM                Real Time Monitor; records and reports performance data for VM systems.

SPM/2              System Performance Monitor/2; provides performance data for an OS/2* system.

STARS              System Trace Analysis Reports; provides various reports based on the analysis of instruction trace data.

TPNS               Teleprocessing Network Simulator; a terminal and network simulation tool.

TPNS Reduction Program
                   Reduces the TPNS log data to provide performance, load, and response time information.

VMPRF              VM Performance Reporting Facility; the VM monitor reduction program.
Migration from VM/ESA 1.2.2
This chapter explores the performance effects of migrating from VM/ESA 1.2.2 to VM/ESA 2.1.0. The following environments were measured: CMS-intensive, VSE guest, and VMSES/E.
CMS-Intensive

VM/ESA 2.1.0 has improved internal throughput rates and response times for the CMS-intensive environments measured. The ITR improvements resulting from decreased processor use can be attributed mostly to the use of compiled REXX execs and compiled XEDIT macros from the CMS system disk (a non-compiled comparison can be found in “9121-480 / Minidisk” on page 31). The following enhancements also contributed:
• CP Module Linkage Changes
• CMS Nucleus Restructure
• SUPERSET XEDIT Subcommand
For more information on these and other performance-related enhancements in VM/ESA 2.1.0, see “Changes That Affect Performance” on page 8.
The internal throughput rates and response times for these measurements are shown in Figure 2 and Figure 3 on page 26.
Figure 2. Internal throughput rate for the various CMS-intensive environments
Figure 3. External response time for the various CMS-intensive environments
9121-742 / Minidisk

Workload: FS8F0R
Hardware Configuration
Processor model:  9121-742
Processors used:  4
Storage:
  Real:      1024MB (default MDC)
  Expanded:  1024MB (BIAS 0.1)
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver:                   TPNS
Think time distribution:  Bactrian
CMS block size:           4KB
Virtual Machines:
Measurement Discussion: Several performance enhancements have been made in VM/ESA 2.1.0. The use of compiled REXX by CMS has provided most of the performance gain shown in the measurements within this section. The workload used (FS8F0R) includes a variety of these execs. For a comparison without the effects of the REXX compiler, see “9121-480 / Minidisk” on page 31.
There has been some pathlength growth within GCS, causing increases in the processor usage within the VTAM machine. This has mostly been caused by the inclusion of APARs to GCS.
Type of    Control   Number      Number of Volumes
DASD       Unit      of Paths    (PAGE, SPOOL, TDSK, User, Server, System)
3390-2     3990-3    4           6  7  7  32 R  2 R
3390-2     3990-2    4           16  6  6
Control Unit   Number   Lines per Control Unit   Speed
3088           1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      500        QUICKDSP ON
VSCSn             3        VSCS         64MB/XA             10000   1200       QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XA             10000   550        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             5500     Users        3MB/XC              100
There is a working set growth within the CMS virtual machine due to the addition of new function that was not offset by performance improvements. See “CMS Working Set Size Increase” on page 16 for more information. The increased CP processor usage is mostly caused by an increase in paging, produced by this working set growth.
Note that these measurements were made with an MDC BIAS value of 0.1 for expanded storage (using the SET MDCACHE command). Previous studies have determined that this improves overall system performance. For more information, refer to the VM/ESA Release 2.2 Performance Report.
The following table shows that VM/ESA 2.1.0 has improved overall performance characteristics compared to VM/ESA 1.2.2. The key indicators of external response time (AVG LAST (T)) and internal throughput rate (ITR (H)) both improved. The external response time improved by 13.3% and the internal throughput improved by 4.2%.
Table 1. Minidisk-only CMS-intensive migration from VM/ESA 1.2.2 on the 9121-742

Release                  1.2.2        2.1.0
Run ID                   S47E550D     S48E5500     Difference   %Difference

Environment
Real Storage             1024MB       1024MB
Exp. Storage             1024MB       1024MB
Users                    5500         5500
VTAMs                    1            1
VSCSs                    3            3
Processors               4            4

Response Time
TRIV INT                 0.107        0.112        0.005        4.67%
NONTRIV INT              0.350        0.335        -0.015       -4.29%
TOT INT                  0.250        0.251        0.001        0.40%
TOT INT ADJ              0.247        0.234        -0.012       -5.05%
AVG FIRST (T)            0.276        0.236        -0.040       -14.51%
AVG LAST (T)             0.361        0.313        -0.048       -13.28%

Throughput
AVG THINK (T)            26.11        26.09        -0.02        -0.06%
ETR                      190.39       180.15       -10.24       -5.38%
ETR (T)                  192.91       193.00       0.09         0.05%
ETR RATIO                0.987        0.933        -0.054       -5.42%
ITR (H)                  213.12       222.12       9.00         4.22%
ITR                      52.61        51.89        -0.72        -1.36%
EMUL ITR                 78.83        82.02        3.19         4.05%
ITRR (H)                 1.000        1.042        0.042        4.22%
ITRR                     1.000        0.986        -0.014       -1.36%

Proc. Usage
PBT/CMD (H)              18.769       18.009       -0.760       -4.05%
PBT/CMD                  18.765       17.979       -0.786       -4.19%
CP/CMD (H)               6.659        7.037        0.378        5.67%
CP/CMD                   6.220        6.580        0.360        5.78%
EMUL/CMD (H)             12.110       10.972       -1.138       -9.40%
EMUL/CMD                 12.545       11.399       -1.146       -9.13%

Processor Util.
TOTAL (H)                362.08       347.57       -14.50       -4.01%
TOTAL                    362.00       347.00       -15.00       -4.14%
UTIL/PROC (H)            90.52        86.89        -3.63        -4.01%
UTIL/PROC                90.50        86.75        -3.75        -4.14%
TOTAL EMUL (H)           233.62       211.76       -21.86       -9.36%
TOTAL EMUL               242.00       220.00       -22.00       -9.09%
MASTER TOTAL (H)         92.68        88.94        -3.73        -4.03%
MASTER TOTAL             93.00        89.00        -4.00        -4.30%
MASTER EMUL (H)          39.69        34.40        -5.28        -13.31%
MASTER EMUL              41.00        36.00        -5.00        -12.20%
TVR(H)                   1.55         1.64         0.09         5.90%
TVR                      1.50         1.58         0.08         5.44%

Storage
NUCLEUS SIZE (V)         2572KB       2756KB       184KB        7.15%
TRACE TABLE (V)          800KB        800KB        0KB          0.00%
WKSET (V)                73           80           7            9.59%
PGBLPGS                  233K         232K         -1K          -0.43%
PGBLPGS/USER             42.4         42.2         -0.2         -0.43%
FREEPGS                  15617        16192        575          3.68%
FREE UTIL                0.92         0.92         0.00         -0.11%
SHRPGS                   1799         1885         86           4.78%

Paging
READS/SEC                643          1016         373          58.01%
WRITES/SEC               480          760          280          58.33%
PAGE/CMD                 5.821        9.202        3.381        58.07%
PAGE IO RATE (V)         169.300      308.800      139.500      82.40%
PAGE IO/CMD (V)          0.878        1.600        0.722        82.31%
XSTOR IN/SEC             823          595          -228         -27.70%
XSTOR OUT/SEC            1429         1558         129          9.03%
XSTOR/CMD                11.674       11.155       -0.518       -4.44%
FAST CLR/CMD             8.994        8.601        -0.393       -4.37%

Queues
DISPATCH LIST            101.18       101.50       0.32         0.31%
ELIGIBLE LIST            0.02         0.00         -0.02        -100.00%

I/O
VIO RATE                 1794         1847         53           2.95%
VIO/CMD                  9.300        9.570        0.270        2.91%
RIO RATE (V)             544          693          149          27.39%
RIO/CMD (V)              2.820        3.591        0.771        27.33%
NONPAGE RIO/CMD (V)      1.942        1.991        0.048        2.49%
DASD RESP TIME (V)       19.900       20.200       0.300        1.51%
MDC REAL SIZE (MB)       33.2         30.2         -3.0         -8.97%
MDC XSTOR SIZE (MB)      63.5         63.6         0.1          0.10%
MDC READS (I/Os)         552          596          44           7.97%
MDC WRITES (I/Os)        26           26           0            0.00%
MDC AVOID                512          556          44           8.59%
MDC HIT RATIO            0.92         0.93         0.01         1.09%

PRIVOPs
PRIVOP/CMD               20.381       20.703       0.322        1.58%
DIAG/CMD                 25.628       23.968       -1.660       -6.48%
DIAG 04/CMD              0.917        0.933        0.016        1.70%
DIAG 08/CMD              0.749        0.750        0.001        0.13%
DIAG 0C/CMD              1.126        1.125        0.000        -0.02%
DIAG 14/CMD              0.024        0.025        0.000        0.18%
DIAG 58/CMD              1.248        1.248        0.000        0.01%
DIAG 98/CMD              0.324        0.382        0.058        17.90%
DIAG A4/CMD              3.571        3.769        0.198        5.54%
DIAG A8/CMD              2.814        2.826        0.011        0.40%
DIAG 214/CMD             13.708       11.680       -2.028       -14.79%
SIE/CMD                  57.021       56.994       -0.027       -0.05%
SIE INTCPT/CMD           37.634       38.756       1.122        2.98%
FREE TOTL/CMD            44.792       45.165       0.372        0.83%

VTAM Machines
WKSET (V)                4140         4140         0            0.00%
TOT CPU/CMD (V)          2.7502       3.0437       0.2935       10.67%
CP CPU/CMD (V)           1.2268       1.3147       0.0879       7.16%
VIRT CPU/CMD (V)         1.5234       1.7290       0.2056       13.50%
DIAG 98/CMD (V)          0.324        0.381        0.057        17.63%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
9121-480 / Minidisk

Workload: FS8F0R
Hardware Configuration
Processor model:  9121-480
Processors used:  2
Storage:
  Real:      256MB (default MDC)
  Expanded:  0MB
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver:                   TPNS
Think time distribution:  Bactrian
CMS block size:           4KB
Virtual Machines:
Measurement Discussion: As in the previous section, the dominating performance improvement is the compiled CMS system REXX execs and XEDIT macros contained on the S-disk. Also shown later in this section is a comparison to VM/ESA 1.2.2 without the use of these compiled execs.
The following table shows that VM/ESA 2.1.0 has improved overall performance characteristics compared to VM/ESA 1.2.2. The key indicators of external response time (AVG LAST (T)) and internal throughput rate (ITR (H)) both improved. The external response time improved by 20.5% and the internal throughput improved by 7.0%.
Type of    Control   Number      Number of Volumes
DASD       Unit      of Paths    (PAGE, SPOOL, TDSK, User, Server, System)
3390-2     3990-2    4           16  6  6
3390-2     3990-3    2           2 R
3390-2     3990-3    4           2  2  16 R
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   560        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1900     Users        3MB/XC              100
Table 2. Minidisk-only CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

Release                  1.2.2        2.1.0
Run ID                   L27E1909     L28E190M     Difference   %Difference

Environment
Real Storage             256MB        256MB
Exp. Storage             0MB          0MB
Users                    1900         1900
VTAMs                    1            1
VSCSs                    0            0
Processors               2            2

Response Time
TRIV INT                 0.130        0.123        -0.007       -5.38%
NONTRIV INT              0.456        0.367        -0.089       -19.52%
TOT INT                  0.345        0.286        -0.059       -17.10%
TOT INT ADJ              0.307        0.251        -0.056       -18.17%
AVG FIRST (T)            0.272        0.224        -0.048       -17.65%
AVG LAST (T)             0.388        0.309        -0.080       -20.46%

Throughput
AVG THINK (T)            26.15        26.16        0.01         0.06%
ETR                      59.34        58.75        -0.59        -0.99%
ETR (T)                  66.67        66.87        0.20         0.30%
ETR RATIO                0.890        0.879        -0.011       -1.29%
ITR (H)                  73.76        78.93        5.17         7.00%
ITR                      32.85        34.69        1.84         5.61%
EMUL ITR                 47.65        51.81        4.16         8.72%
ITRR (H)                 1.000        1.070        0.070        7.00%
ITRR                     1.000        1.056        0.056        5.61%

Proc. Usage
PBT/CMD (H)              27.113       25.339       -1.775       -6.54%
PBT/CMD                  27.147       25.272       -1.876       -6.91%
CP/CMD (H)               9.009        8.958        -0.051       -0.57%
CP/CMD                   8.399        8.374        -0.025       -0.30%
EMUL/CMD (H)             18.104       16.381       -1.723       -9.52%
EMUL/CMD                 18.748       16.898       -1.851       -9.87%

Processor Util.
TOTAL (H)                180.77       169.45       -11.32       -6.26%
TOTAL                    181.00       169.00       -12.00       -6.63%
UTIL/PROC (H)            90.39        84.72        -5.66        -6.26%
UTIL/PROC                90.50        84.50        -6.00        -6.63%
TOTAL EMUL (H)           120.71       109.55       -11.16       -9.25%
TOTAL EMUL               125.00       113.00       -12.00       -9.60%
MASTER TOTAL (H)         89.96        84.07        -5.90        -6.56%
MASTER TOTAL             90.00        84.00        -6.00        -6.67%
MASTER EMUL (H)          53.35        47.88        -5.47        -10.26%
MASTER EMUL              55.00        50.00        -5.00        -9.09%
TVR(H)                   1.50         1.55         0.05         3.29%
TVR                      1.45         1.50         0.05         3.29%

Storage
NUCLEUS SIZE (V)         2572KB       2756KB       184KB        7.15%
TRACE TABLE (V)          400KB        400KB        0KB          0.00%
WKSET (V)                84           86           2            2.38%
PGBLPGS                  55084        55088        4            0.01%
PGBLPGS/USER             29.0         29.0         0.0          0.01%
FREEPGS                  5406         5570         164          3.03%
FREE UTIL                0.95         0.97         0.02         1.91%
SHRPGS                   1313         1361         48           3.66%

Paging
READS/SEC                606          669          63           10.40%
WRITES/SEC               445          450          5            1.12%
PAGE/CMD                 15.763       16.733       0.970        6.15%
PAGE IO RATE (V)         181.700      187.900      6.200        3.41%
PAGE IO/CMD (V)          2.725        2.810        0.085        3.10%
XSTOR IN/SEC             0            0            0            na
XSTOR OUT/SEC            0            0            0            na
XSTOR/CMD                0.000        0.000        0.000        na
FAST CLR/CMD             8.954        8.524        -0.431       -4.81%

Queues
DISPATCH LIST            41.98        36.88        -5.11        -12.16%
ELIGIBLE LIST            0.00         0.02         0.02         na

I/O
VIO RATE                 671          699          28           4.17%
VIO/CMD                  10.064       10.453       0.389        3.86%
RIO RATE (V)             393          389          -4           -1.02%
RIO/CMD (V)              5.894        5.817        -0.077       -1.31%
NONPAGE RIO/CMD (V)      3.169        3.007        -0.162       -5.11%
DASD RESP TIME (V)       19.200       19.700       0.500        2.60%
MDC REAL SIZE (MB)       41.4         39.5         -1.9         -4.52%
MDC XSTOR SIZE (MB)      0.0          0.0          0.0          na
MDC READS (I/Os)         191          207          16           8.38%
MDC WRITES (I/Os)        9.55         9.49         -0.06        -0.63%
MDC AVOID                180          196          16           8.89%
MDC HIT RATIO            0.94         0.94         0.00         0.00%

PRIVOPs
PRIVOP/CMD               13.906       13.856       -0.051       -0.36%
DIAG/CMD                 27.861       26.448       -1.413       -5.07%
DIAG 04/CMD              2.401        2.480        0.080        3.31%
DIAG 08/CMD              0.752        0.748        -0.003       -0.44%
DIAG 0C/CMD              1.126        1.126        0.000        -0.01%
DIAG 14/CMD              0.025        0.024        0.000        -1.10%
DIAG 58/CMD              1.248        1.247        -0.001       -0.10%
DIAG 98/CMD              1.081        1.250        0.169        15.63%
DIAG A4/CMD              3.590        3.802        0.212        5.91%
DIAG A8/CMD              2.823        2.832        0.009        0.31%
DIAG 214/CMD             13.665       11.707       -1.958       -14.33%
SIE/CMD                  54.070       53.803       -0.267       -0.49%
SIE INTCPT/CMD           34.605       35.510       0.905        2.62%
FREE TOTL/CMD            49.495       50.110       0.614        1.24%

VTAM Machines
WKSET (V)                559          551          -8           -1.43%
TOT CPU/CMD (V)          3.7996       4.1538       0.3542       9.32%
CP CPU/CMD (V)           1.4415       1.5203       0.0788       5.47%
VIRT CPU/CMD (V)         2.3581       2.6335       0.2754       11.68%
DIAG 98/CMD (V)          1.081        1.250        0.170        15.69%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
The following table compares VM/ESA 2.1.0 without the compiled version of the REXX execs and XEDIT macros to VM/ESA 1.2.2. This comparison may more closely represent the performance that can be expected if your system already uses the REXX compiler for the REXX programs and XEDIT macros found on the CMS system disk. The key indicators of external response time (AVG LAST (T)) and internal throughput rate (ITR (H)) both show a slight degradation. The external response time increased by 2.1% and the internal throughput decreased by 1.1%.
Table 3 (Page 1 of 3). Minidisk-only CMS-intensive migration (without compliedREXX) from VM/ESA 1.2.2 on the 9121-480
ReleaseRun ID
1.2.2L27E1909
2.1.0L28E190K Difference %Difference
EnvironmentReal StorageExp. StorageUsersVTAMsVSCSsProcessors
256MB0MB1900
102
256MB0MB1900
102
Response TimeTRIV INTNONTRIV INTTOT INTTOT INT ADJAVG FIRST (T)AVG LAST (T)
0.1300.4560.3450.3070.2720.388
0.1360.4680.3560.3170.2800.397
0.0060.0120.0110.0100.0080.008
4.62%2.63%3.19%3.18%2.94%2.06%
ThroughputAVG THINK (T)ETRETR (T)ETR RATIOITR (H)ITREMUL ITRITRR (H)ITRR
26.1559.3466.670.89073.7632.8547.651.0001.000
26.1959.3366.670.89072.9632.4746.820.9890.989
0.04-0.010.00
0.000-0.80-0.38-0.83
-0.011-0.011
0.15%-0.02%-0.01%-0.01%-1.09%-1.14%-1.75%-1.09%-1.14%
Proc. UsagePBT/CMD (H)PBT/CMDCP/CMD (H)CP/CMDEMUL/CMD (H)EMUL/CMD
27.11327.147
9.0098.399
18.10418.748
27.41127.449
9.0008.400
18.41019.050
0.2980.302
-0.0080.0010.3060.301
1.10%1.11%
-0.09%0.01%1.69%1.61%
Processor Util.TOTAL (H)TOTALUTIL/PROC (H)UTIL/PROCTOTAL EMUL (H)TOTAL EMULMASTER TOTAL (H)MASTER TOTALMASTER EMUL (H)MASTER EMULTVR(H)TVR
180.77181.00
90.3990.50
120.71125.00
89.9690.0053.3555.00
1.501.45
182.74183.00
91.3791.50
122.74127.00
91.0191.0054.8057.00
1.491.44
1.972.000.991.002.032.001.051.001.452.00
-0.01-0.01
1.09%1.10%1.09%1.10%1.68%1.60%1.16%1.11%2.71%3.64%
-0.58%-0.49%
34 VM/ESA 2.1.0 Performance Report
Migration: CMS-Intensive
Table 3 (Page 2 of 3). Minidisk-only CMS-intensive migration (without compiled REXX) from VM/ESA 1.2.2 on the 9121-480

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 L27E1909   L28E190K

  Environment
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  Users                  1900       1900
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             2          2

  Storage
  NUCLEUS SIZE (V)       2572KB     2756KB     184KB       7.15%
  TRACE TABLE (V)        400KB      400KB      0KB         0.00%
  WKSET (V)              84         90         6           7.14%
  PGBLPGS                55084      55050      -34         -0.06%
  PGBLPGS/USER           29.0       29.0       0.0         -0.06%
  FREEPGS                5406       5602       196         3.63%
  FREE UTIL              0.95       0.96       0.01        1.33%
  SHRPGS                 1313       1319       6           0.46%

  Paging
  READS/SEC              606        672        66          10.89%
  WRITES/SEC             445        461        16          3.60%
  PAGE/CMD               15.763     16.995     1.232       7.81%
  PAGE IO RATE (V)       181.700    205.400    23.700      13.04%
  PAGE IO/CMD (V)        2.725      3.081      0.356       13.05%
  XSTOR IN/SEC           0          0          0           na
  XSTOR OUT/SEC          0          0          0           na
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           8.954      9.240      0.286       3.19%

  Queues
  DISPATCH LIST          41.98      41.37      -0.61       -1.46%
  ELIGIBLE LIST          0.00       0.00       0.00        na

  I/O
  VIO RATE               671        678        7           1.04%
  VIO/CMD                10.064     10.170     0.106       1.05%
  RIO RATE (V)           393        388        -5          -1.27%
  RIO/CMD (V)            5.894      5.820      -0.075      -1.26%
  NONPAGE RIO/CMD (V)    3.169      2.739      -0.430      -13.58%
  DASD RESP TIME (V)     19.200     19.400     0.200       1.04%
  MDC REAL SIZE (MB)     41.4       38.0       -3.4        -8.28%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       191        203        12          6.28%
  MDC WRITES (I/Os)      9.55       9.47       -0.08       -0.84%
  MDC AVOID              180        192        12          6.67%
  MDC HIT RATIO          0.94       0.94       0.00        0.00%

  PRIVOPs
  PRIVOP/CMD             13.906     13.890     -0.016      -0.11%
  DIAG/CMD               27.861     28.046     0.185       0.67%
  DIAG 04/CMD            2.401      2.400      -0.001      -0.05%
  DIAG 08/CMD            0.752      0.750      -0.001      -0.18%
  DIAG 0C/CMD            1.126      1.126      0.000       -0.01%
  DIAG 14/CMD            0.025      0.025      0.000       -0.18%
  DIAG 58/CMD            1.248      1.247      -0.001      -0.09%
  DIAG 98/CMD            1.081      1.044      -0.036      -3.37%
  DIAG A4/CMD            3.590      3.760      0.171       4.75%
  DIAG A8/CMD            2.823      2.802      -0.021      -0.75%
  DIAG 214/CMD           13.665     13.742     0.077       0.56%
  SIE/CMD                54.070     54.599     0.529       0.98%
  SIE INTCPT/CMD         34.605     34.397     -0.207      -0.60%
  FREE TOTL/CMD          49.495     49.559     0.064       0.13%
Migration from VM/ESA 1.2.2 35
Table 3 (Page 3 of 3). Minidisk-only CMS-intensive migration (without compiled REXX) from VM/ESA 1.2.2 on the 9121-480

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 L27E1909   L28E190K

  Environment
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  Users                  1900       1900
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             2          2

  VTAM Machines
  WKSET (V)              559        573        14          2.50%
  TOT CPU/CMD (V)        3.7996     4.0416     0.2420      6.37%
  CP CPU/CMD (V)         1.4415     1.4833     0.0418      2.90%
  VIRT CPU/CMD (V)       2.3581     2.5583     0.2002      8.49%
  DIAG 98/CMD (V)        1.081      1.045      -0.036      -3.29%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
9121-480 / SFS
Workload: FS8FMAXR

Hardware Configuration

Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)

DASD:

  Type of  Control  Number               - Number of Volumes -
  DASD     Unit     of Paths   PAGE SPOOL TDSK User Server System
  3390-2   3990-2   4          16 6 6
  3390-2   3990-3   2          2 R
  3390-2   3990-3   4          2 2 16 R

Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.

Communications:

  Control Unit   Number   Lines per Control Unit   Speed
  3088-08        1        NA                       4.5MB

Software Configuration

Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB

Virtual Machines:

  Virtual
  Machine    Number  Type        Machine Size/Mode  SHARE  RESERVED  Other Options
  CRRSERV1   1       SFS         16MB/XC            100
  ROSERV1    1       SFS         32MB/XC            100              QUICKDSP ON
  RWSERVn    2       SFS         64MB/XC            1500   1300      QUICKDSP ON
  SMART      1       RTM         16MB/370           3%     400       QUICKDSP ON
  VTAMXA     1       VTAM/VSCS   64MB/XA            10000  512       QUICKDSP ON
  WRITER     1       CP monitor  2MB/XA             100              QUICKDSP ON
  Unnnn      1620    Users       3MB/XC             100

Measurement Discussion: Internal throughput (ITR(H)) improved by 4.9%, while external response time (AVG LAST (T)) improved by 12%. These improvements were mostly due to the use of compiled REXX by CMS in VM/ESA 2.1.0.

The percentage ITR improvement is somewhat less than the 7.0% improvement observed for the corresponding minidisk-only measurements (“9121-480 / Minidisk” on page 31). The primary reason for this is that, in the SFS case, the same absolute decrease in processor time per command resulting from the compiled REXX item is divided by a larger base processor time per command, resulting in a smaller percentage decrease. In addition, there has been some increase in SFS server processor usage due to service and the byte file system support. This has offset some of the processor usage improvement from the compiled REXX item.

The 5% increase in SFS server working set is due to an in-memory SFS call trace table that was added in VM/ESA 2.1.0.
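The dilution effect described above is simple arithmetic; as a sketch with illustrative numbers (the minidisk base value here is a stand-in assumption, not a figure from this report, while the SFS base is close to Table 4's PBT/CMD (H)):

```python
delta_ms = 1.5            # same absolute drop in processor msec/cmd from compiled REXX
minidisk_base = 25.0      # illustrative minidisk-only base (assumption)
sfs_base = 31.3           # SFS base, near Table 4's 31.311 msec/cmd

md_pct = delta_ms / minidisk_base * 100    # about 6.0% improvement
sfs_pct = delta_ms / sfs_base * 100        # about 4.8%: smaller, over the larger base
print(round(md_pct, 1), round(sfs_pct, 1))
```

The same absolute saving always yields a smaller percentage when divided by a larger base.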
Table 4 (Page 1 of 3). SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 L27S1625   L28S1625

  Environment
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  Users                  1620       1620
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             2          2

  Response Time
  TRIV INT               0.121      0.115      -0.006      -4.96%
  NONTRIV INT            0.440      0.378      -0.062      -14.09%
  TOT INT                0.333      0.291      -0.042      -12.61%
  TOT INT ADJ            0.292      0.255      -0.038      -12.85%
  AVG FIRST (T)          0.236      0.210      -0.026      -11.23%
  AVG LAST (T)           0.341      0.299      -0.042      -12.44%

  Throughput
  AVG THINK (T)          26.19      26.17      -0.02       -0.10%
  ETR                    50.06      49.86      -0.20       -0.40%
  ETR (T)                57.07      57.00      -0.07       -0.13%
  ETR RATIO              0.877      0.875      -0.002      -0.27%
  ITR (H)                63.88      67.02      3.14        4.92%
  ITR                    28.02      29.33      1.30        4.65%
  EMUL ITR               41.31      44.53      3.23        7.82%
  ITRR (H)               1.000      1.049      0.049       4.92%
  ITRR                   1.000      1.047      0.047       4.65%

  Proc. Usage
  PBT/CMD (H)            31.311     29.843     -1.468      -4.69%
  PBT/CMD                31.366     29.827     -1.539      -4.91%
  CP/CMD (H)             10.696     10.845     0.149       1.39%
  CP/CMD                 10.163     10.176     0.013       0.13%
  EMUL/CMD (H)           20.614     18.998     -1.616      -7.84%
  EMUL/CMD               21.203     19.651     -1.552      -7.32%

  Processor Util.
  TOTAL (H)              178.68     170.09     -8.59       -4.81%
  TOTAL                  179.00     170.00     -9.00       -5.03%
  UTIL/PROC (H)          89.34      85.05      -4.30       -4.81%
  UTIL/PROC              89.50      85.00      -4.50       -5.03%
  TOTAL EMUL (H)         117.64     108.28     -9.36       -7.95%
  TOTAL EMUL             121.00     112.00     -9.00       -7.44%
  MASTER TOTAL (H)       89.15      84.57      -4.58       -5.14%
  MASTER TOTAL           89.00      85.00      -4.00       -4.49%
  MASTER EMUL (H)        53.04      48.43      -4.61       -8.68%
  MASTER EMUL            55.00      50.00      -5.00       -9.09%
  TVR(H)                 1.52       1.57       0.05        3.42%
  TVR                    1.48       1.52       0.04        2.60%
Table 4 (Page 2 of 3). SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 L27S1625   L28S1625

  Environment
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  Users                  1620       1620
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             2          2

  Storage
  NUCLEUS SIZE (V)       2572KB     2756KB     184KB       7.15%
  TRACE TABLE (V)        400KB      400KB      0KB         0.00%
  WKSET (V)              81         82         1           1.23%
  PGBLPGS                56281      56106      -175        -0.31%
  PGBLPGS/USER           34.7       34.6       -0.1        -0.31%
  FREEPGS                4741       4871       130         2.74%
  FREE UTIL              0.92       0.95       0.03        3.06%
  SHRPGS                 1471       1542       71          4.83%

  Paging
  READS/SEC              524        550        26          4.96%
  WRITES/SEC             365        372        7           1.92%
  PAGE/CMD               15.578     16.177     0.599       3.84%
  PAGE IO RATE (V)       140.100    143.300    3.200       2.28%
  PAGE IO/CMD (V)        2.455      2.514      0.059       2.41%
  XSTOR IN/SEC           0          0          0           na
  XSTOR OUT/SEC          0          0          0           na
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           8.709      8.281      -0.428      -4.91%

  Queues
  DISPATCH LIST          39.23      34.93      -4.30       -10.97%
  ELIGIBLE LIST          0.00       0.00       0.00        na

  I/O
  VIO RATE               580        596        16          2.76%
  VIO/CMD                10.163     10.457     0.294       2.89%
  RIO RATE (V)           332        345        13          3.92%
  RIO/CMD (V)            5.818      6.053      0.235       4.05%
  NONPAGE RIO/CMD (V)    3.363      3.539      0.176       5.24%
  DASD RESP TIME (V)     18.300     18.200     -0.100      -0.55%
  MDC REAL SIZE (MB)     67.3       65.6       -1.8        -2.61%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       151        162        11          7.28%
  MDC WRITES (I/Os)      14         14         0           0.00%
  MDC AVOID              126        137        11          8.73%
  MDC HIT RATIO          0.83       0.84       0.01        1.20%

  PRIVOPs
  PRIVOP/CMD             20.390     20.508     0.118       0.58%
  DIAG/CMD               26.113     24.363     -1.750      -6.70%
  DIAG 04/CMD            2.660      2.657      -0.003      -0.10%
  DIAG 08/CMD            0.748      0.751      0.003       0.40%
  DIAG 0C/CMD            1.148      1.148      0.000       -0.02%
  DIAG 14/CMD            0.024      0.025      0.000       1.13%
  DIAG 58/CMD            1.248      1.249      0.001       0.09%
  DIAG 98/CMD            1.312      1.432      0.120       9.12%
  DIAG A4/CMD            1.983      2.164      0.181       9.14%
  DIAG A8/CMD            2.613      2.601      -0.013      -0.49%
  DIAG 214/CMD           13.240     11.112     -2.128      -16.07%
  SIE/CMD                60.980     60.935     -0.045      -0.07%
  SIE INTCPT/CMD         41.467     42.654     1.188       2.86%
  FREE TOTL/CMD          52.604     53.268     0.663       1.26%
The SFS counts and timings in the following two tables are provided to supplement the information provided above. These were acquired by issuing the QUERY FILEPOOL STATUS command once at the beginning of the measurement interval and once at the end. The QUERY FILEPOOL STATUS information was obtained for each SFS file pool server and the CRR recovery server. The counts and timings for each server were added together. A description of the QUERY FILEPOOL STATUS output can be found in SFS and CRR Planning, Administration, and Operation.

Table 5 consists of counts and timings that are normalized by the number of commands (as determined by TPNS). The beginning values were subtracted from the ending values and divided by the number of commands in the measurement interval. Counts and timings that have a value of zero for all measurements are not shown. A zero entry indicates that at least one occurrence was counted but the result of normalizing per command is so small that it rounds to zero.
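As a minimal sketch of that normalization (the counter values and command count below are illustrative assumptions, not figures taken from the report):

```python
def per_command(begin_count, end_count, commands):
    # Delta of a QUERY FILEPOOL STATUS counter over the interval, per TPNS command.
    return (end_count - begin_count) / commands

commands = 57_000                                    # roughly ETR (T) x interval length (assumption)
close_reqs = per_command(1_000, 21_617, commands)    # about 0.36 per command
rare_event = per_command(5, 7, commands)             # counted, but rounds to 0.0000 in the table
print(round(close_reqs, 4), round(rare_event, 4))
```

This is why a tabulated 0.0000 can still mean "occurred at least once."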
Table 4 (Page 3 of 3). SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 L27S1625   L28S1625

  Environment
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  Users                  1620       1620
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             2          2

  VTAM Machines
  WKSET (V)              505        502        -3          -0.59%
  TOT CPU/CMD (V)        4.0011     4.3571     0.3560      8.90%
  CP CPU/CMD (V)         1.5089     1.5888     0.0799      5.30%
  VIRT CPU/CMD (V)       2.4922     2.7683     0.2761      11.08%
  DIAG 98/CMD (V)        1.312      1.433      0.121       9.19%

  SFS Servers
  WKSET (V)              3205       3364       159         4.96%
  TOT CPU/CMD (V)        3.1931     3.3336     0.1405      4.40%
  CP CPU/CMD (V)         1.4311     1.4621     0.0310      2.17%
  VIRT CPU/CMD (V)       1.7620     1.8715     0.1095      6.21%
  FP REQ/CMD (Q)         1.119      1.142      0.023       2.06%
  IO/CMD (Q)             1.578      1.589      0.011       0.70%
  IO TIME/CMD (Q)        0.021      0.022      0.001       4.76%
  SFS TIME/CMD (Q)       0.027      0.030      0.003       11.11%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Q=Query Filepool Counters, Unmarked=RTM
Table 5 (Page 1 of 2). SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                              1.2.2      2.1.0      Difference  %Difference
  Run ID                               L27S1625   L28S1625

  Environment
  Real Storage                         256MB      256MB
  Exp. Storage                         0MB        0MB
  Users                                1620       1620
  VTAMs                                1          1
  VSCSs                                0          0
  Processors                           2          2

  Close File Requests                  0.3638     0.3617     -0.0021     -0.58%
  Commit Requests                      0.0163     0.0163     0.0000      0.00%
  Connect Requests                     0.0078     0.0078     0.0000      0.00%
  Delete File Requests                 0.0734     0.0732     -0.0002     -0.27%
  Lock Requests                        0.0248     0.0247     -0.0001     -0.40%
  Open File New Requests               0.0033     0.0033     0.0000      0.00%
  Open File Read Requests              0.2188     0.2167     -0.0021     -0.96%
  Open File Replace Requests           0.1205     0.1207     0.0002      0.17%
  Open File Write Requests             0.0212     0.0212     0.0000      0.00%
  Query File Pool Requests             0.0000     0.0000     0.0000      na
  Query User Space Requests            0.0212     0.0212     0.0000      0.00%
  Read File Requests                   0.1451     0.1715     0.0264      18.19%
  Refresh Directory Requests           0.0227     0.0227     0.0000      0.00%
  Rename Requests                      0.0049     0.0049     0.0000      0.00%
  Unlock Requests                      0.0246     0.0246     0.0000      0.00%
  Write File Requests                  0.0504     0.0508     0.0004      0.79%

  Total File Pool Requests             1.1188     1.1416     0.0228      2.04%
  File Pool Request Service Time       27.3494    30.1353    2.7859      10.19%
  Local File Pool Requests             1.1188     1.1416     0.0228      2.04%

  Begin LUWs                           0.4434     0.4419     -0.0015     -0.34%
  Agent Holding Time (msec)            89.3498    81.6177    -7.7321     -8.65%
  SAC Calls                            5.4329     5.4533     0.0204      0.38%

  Catalog Lock Conflicts               0.0005     0.0011     0.0006      120.00%
  Total Lock Conflicts                 0.0005     0.0011     0.0006      120.00%
  Lock Wait Time (msec)                0.0128     0.0245     0.0117      91.41%

  File Blocks Read                     0.9047     0.8983     -0.0064     -0.71%
  File Blocks Written                  0.5001     0.4968     -0.0033     -0.66%
  Catalog Blocks Read                  0.4953     0.4848     -0.0105     -2.12%
  Catalog Blocks Written               0.2569     0.2526     -0.0043     -1.67%
  Control Minidisk Blocks Written      0.0499     0.0496     -0.0003     -0.60%
  Log Blocks Written                   0.4569     0.4565     -0.0004     -0.09%
  Total DASD Block Transfers           2.6638     2.6386     -0.0252     -0.95%

  BIO Requests to Read File Blocks     0.3912     0.4139     0.0227      5.80%
  BIO Requests to Write File Blocks    0.1792     0.1783     -0.0009     -0.50%
  BIO Requests to Read Catalog Blks    0.4953     0.4848     -0.0105     -2.12%
  BIO Requests to Write Catalog Blks   0.2081     0.2036     -0.0045     -2.16%
  BIO Requests to Write Ctl Mdsk Blks  0.0020     0.0020     0.0000      0.00%
  BIO Requests to Write Log Blocks     0.3980     0.3977     -0.0003     -0.08%
  Total BIO Requests                   1.6739     1.6804     0.0065      0.39%
  Total BIO Request Time (msec)        21.4774    21.6018    0.1244      0.58%

  I/O Requests to Read File Blocks     0.2687     0.2982     0.0295      10.98%
  I/O Requests to Write File Blocks    0.1964     0.1934     -0.0030     -1.53%
  I/O Requests to Read Catalog Blks    0.4953     0.4848     -0.0105     -2.12%
  I/O Requests to Write Catalog Blks   0.2150     0.2104     -0.0046     -2.14%
  I/O Requests to Write Ctl Mdsk Blks  0.0039     0.0039     0.0000      0.00%
  I/O Requests to Write Log Blocks     0.3984     0.3980     -0.0004     -0.10%
  Total I/O Requests                   1.5776     1.5887     0.0111      0.70%
Table 5 (Page 2 of 2). SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                              1.2.2      2.1.0      Difference  %Difference
  Run ID                               L27S1625   L28S1625

  Environment
  Real Storage                         256MB      256MB
  Exp. Storage                         0MB        0MB
  Users                                1620       1620
  VTAMs                                1          1
  VSCSs                                0          0
  Processors                           2          2

  Get Logname Requests                 0.0032     0.0032     0.0000      0.00%
  Get LUWID Requests                   0.0032     0.0032     0.0000      0.00%
  Total CRR Requests                   0.0065     0.0065     0.0000      0.00%
  CRR Request Service Time (msec)      0.0755     0.0756     0.0001      0.13%
  Log I/O Requests                     0.0065     0.0065     0.0000      0.00%
Note: Query Filepool Counters — normalized by command
Table 6 consists of derived relationships that were calculated from a combination of two or more individual counts or timings. See the glossary for definitions of these derived values.
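For example, two of Table 6's derived values fall directly out of the per-command counters in Table 5 (VM/ESA 2.1.0 column):

```python
# Average file pool request time = service time per command / requests per command.
fp_service_time = 30.1353    # msec/cmd, "File Pool Request Service Time" (Table 5)
fp_requests = 1.1416         # "Total File Pool Requests" per command (Table 5)
avg_request_time = fp_service_time / fp_requests     # about 26.4 msec, as in Table 6

# Blocking factor = DASD blocks moved per block I/O (BIO) request.
blocks = 2.6386              # "Total DASD Block Transfers" per command
bio_requests = 1.6804        # "Total BIO Requests" per command
blocking_factor = blocks / bio_requests              # about 1.57 blocks/BIO
print(round(avg_request_time, 1), round(blocking_factor, 2))
```

The other entries in Table 6 are built from the same kind of ratios; the glossary gives the exact numerator and denominator for each.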
Table 6. SFS CMS-intensive migration from VM/ESA 1.2.2 on the 9121-480

  Release                              1.2.2      2.1.0      Difference  %Difference
  Run ID                               L27S1625   L28S1625

  Environment
  Real Storage                         256MB      256MB
  Exp. Storage                         0MB        0MB
  Users                                1620       1620
  VTAMs                                1          1
  VSCSs                                0          0
  Processors                           2          2

  Agents Held                          5.1        4.7        -0.4        -8.77%
  Agents In-call                       1.6        1.7        0.2         10.05%
  Avg LUW Time (msec)                  201.5      184.7      -16.8       -8.34%
  Avg File Pool Request Time (msec)    24.4       26.4       2.0         7.99%
  Avg Lock Wait Time (msec)            25.6       22.3       -3.3        -13.00%
  SAC Calls / FP Request               4.86       4.78       -0.08       -1.63%

  Deadlocks (delta)                    0          0          0           na
  Rollbacks Due to Deadlock (delta)    0          0          0           na
  Rollback Requests (delta)            0          0          0           na
  LUW Rollbacks (delta)                0          781        781         na

  Checkpoints Taken (delta)            32         32         0           0.00%
  Checkpoint Duration (sec)            1.8        2.6        0.8         45.23%
  Seconds Between Checkpoints          60.2       60.2       0.0         0.00%
  Checkpoint Utilization               2.9        4.3        1.3         45.23%

  BIO Request Time (msec)              12.83      12.86      0.02        0.19%
  Blocking Factor (Blocks/BIO)         1.59       1.57       -0.02       -1.33%
  Chaining Factor (Blocks/IO)          1.69       1.66       -0.03       -1.64%
Note: Query Filepool Counters — derived results
9221-170 / Minidisk
Workload: FS8F0R

Hardware Configuration

Processor model: 9221-170
Processors used: 1
Storage:
  Real: 64MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)

DASD:

  Type of  Control  Number               - Number of Volumes -
  DASD     Unit     of Paths   PAGE SPOOL TDSK User Server System
  3390-2   3990-2   1          16 6 6
  3390-2   3990-3   1          2 2 8 R 2 R

Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.

Communications:

  Control Unit   Number   Lines per Control Unit   Speed
  3088           1        NA                       4.5MB

Software Configuration

Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB

Virtual Machines:

  Virtual
  Machine   Number  Type        Machine Size/Mode  SHARE  RESERVED  Other Options
  SMART     1       RTM         16MB/370           3%     350       QUICKDSP ON
  VTAMXA    1       VTAM/VSCS   64MB/XA            10000  300       QUICKDSP ON
  WRITER    1       CP monitor  2MB/XA             100              QUICKDSP ON
  Unnnn     300     Users       3MB/XC             100

Measurement Discussion: The following table shows that VM/ESA 2.1.0 compared to VM/ESA 1.2.2 has improved its overall performance characteristics. The external response time (AVG LAST (T)) decreased by 7.2% and the internal throughput rate (ITR(H)) improved by 3.8%.
Table 7 (Page 1 of 2). Minidisk-only CMS-intensive migration from VM/ESA 1.2.2 on the 9221-170

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 H17E0304   H18E0303

  Environment
  Real Storage           64MB       64MB
  Exp. Storage           0MB        0MB
  Users                  300        300
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             1          1

  Response Time
  TRIV INT               0.230      0.221      -0.009      -3.91%
  NONTRIV INT            0.976      0.909      -0.067      -6.86%
  TOT INT                0.749      0.704      -0.045      -6.01%
  TOT INT ADJ            0.639      0.599      -0.040      -6.21%
  AVG FIRST (T)          0.387      0.358      -0.029      -7.49%
  AVG LAST (T)           0.680      0.631      -0.049      -7.21%

  Throughput
  AVG THINK (T)          28.03      28.09      0.06        0.21%
  ETR                    8.90       8.89       -0.01       -0.11%
  ETR (T)                10.43      10.44      0.01        0.10%
  ETR RATIO              0.853      0.851      -0.002      -0.21%
  ITR (H)                11.72      12.17      0.45        3.84%
  ITR                    10.00      10.37      0.37        3.65%
  EMUL ITR               14.90      15.86      0.96        6.45%
  ITRR (H)               1.000      1.038      0.038       3.84%
  ITRR                   1.000      1.037      0.037       3.65%

  Proc. Usage
  PBT/CMD (H)            85.357     82.200     -3.157      -3.70%
  PBT/CMD                85.310     82.353     -2.957      -3.47%
  CP/CMD (H)             33.695     33.931     0.236       0.70%
  CP/CMD                 27.798     28.728     0.930       3.35%
  EMUL/CMD (H)           51.663     48.269     -3.393      -6.57%
  EMUL/CMD               57.512     53.625     -3.887      -6.76%

  Processor Util.
  TOTAL (H)              89.05      85.84      -3.21       -3.60%
  TOTAL                  89.00      86.00      -3.00       -3.37%
  TOTAL EMUL (H)         53.90      50.41      -3.49       -6.48%
  TOTAL EMUL             60.00      56.00      -4.00       -6.67%
  TVR(H)                 1.65       1.70       0.05        3.07%
  TVR                    1.48       1.54       0.05        3.53%

  Storage
  NUCLEUS SIZE (V)       2556KB     2740KB     184KB       7.20%
  TRACE TABLE (V)        200KB      200KB      0KB         0.00%
  WKSET (V)              87         90         3           3.45%
  PGBLPGS                13506      13427      -79         -0.58%
  PGBLPGS/USER           45.0       44.8       -0.3        -0.58%
  FREEPGS                953        972        19          1.99%
  FREE UTIL              0.89       0.90       0.01        0.82%
  SHRPGS                 1101       1199       98          8.90%

  Paging
  READS/SEC              84         86         2           2.38%
  WRITES/SEC             69         70         1           1.45%
  PAGE/CMD               14.666     14.939     0.273       1.86%
  PAGE IO RATE (V)       26.300     27.000     0.700       2.66%
  PAGE IO/CMD (V)        2.521      2.586      0.065       2.56%
  XSTOR IN/SEC           0          0          0           na
  XSTOR OUT/SEC          0          0          0           na
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           8.819      8.427      -0.392      -4.44%
Table 7 (Page 2 of 2). Minidisk-only CMS-intensive migration from VM/ESA 1.2.2 on the 9221-170

  Release                1.2.2      2.1.0      Difference  %Difference
  Run ID                 H17E0304   H18E0303

  Environment
  Real Storage           64MB       64MB
  Exp. Storage           0MB        0MB
  Users                  300        300
  VTAMs                  1          1
  VSCSs                  0          0
  Processors             1          1

  Queues
  DISPATCH LIST          11.37      11.38      0.01        0.05%
  ELIGIBLE LIST          0.00       0.00       0.00        na

  I/O
  VIO RATE               117        119        2           1.71%
  VIO/CMD                11.215     11.395     0.180       1.61%
  RIO RATE (V)           69         70         1           1.45%
  RIO/CMD (V)            6.614      6.703      0.089       1.35%
  NONPAGE RIO/CMD (V)    4.093      4.118      0.025       0.60%
  DASD RESP TIME (V)     24.200     24.400     0.200       0.83%
  MDC REAL SIZE (MB)     11.9       11.2       -0.7        -5.77%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       30         32         2           6.67%
  MDC WRITES (I/Os)      1.51       1.49       -0.02       -1.32%
  MDC AVOID              27         29         2           7.41%
  MDC HIT RATIO          0.91       0.92       0.01        1.10%

  PRIVOPs
  PRIVOP/CMD             14.514     14.490     -0.023      -0.16%
  DIAG/CMD               32.992     31.268     -1.724      -5.23%
  DIAG 04/CMD            6.268      6.316      0.048       0.77%
  DIAG 08/CMD            0.747      0.747      0.000       0.00%
  DIAG 0C/CMD            1.128      1.129      0.001       0.10%
  DIAG 14/CMD            0.024      0.024      0.000       -0.10%
  DIAG 58/CMD            1.251      1.251      0.001       0.05%
  DIAG 98/CMD            2.286      2.270      -0.016      -0.69%
  DIAG A4/CMD            3.587      3.795      0.208       5.79%
  DIAG A8/CMD            2.821      2.827      0.006       0.23%
  DIAG 214/CMD           13.729     11.669     -2.060      -15.01%
  SIE/CMD                63.455     62.914     -0.541      -0.85%
  SIE INTCPT/CMD         44.419     44.669     0.250       0.56%
  FREE TOTL/CMD          58.759     58.988     0.229       0.39%

  VTAM Machines
  WKSET (V)              288        274        -14         -4.86%
  TOT CPU/CMD (V)        17.6531    18.3805    0.7274      4.12%
  CP CPU/CMD (V)         6.8962     6.9958     0.0996      1.44%
  VIRT CPU/CMD (V)       10.7570    11.3848    0.6278      5.84%
  DIAG 98/CMD (V)        2.287      2.270      -0.017      -0.73%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Migration: VSE/ESA Guest
VSE/ESA Guest

In the following three sections, VSE/ESA* 2.1.0³ guest performance measurement results are presented and discussed for the DYNAPACE batch workload and the VSECICS transaction processing workload. These sections compare VSE/ESA guest performance on VM/ESA 1.2.2 to VM/ESA 2.1.0 and to VSE/ESA native.
9121-320 / DYNAPACE

This section examines VSE/ESA 2.1.0 guest performance of VM/ESA 1.2.2 compared to VM/ESA 2.1.0 running the DYNAPACE workload on a 9121-320. DYNAPACE is a batch-only workload and is characterized by heavy I/O. See Appendix A, “Workloads” on page 157 for a detailed description of the workload. Because the 9121-320 is a uniprocessor, the VSE/ESA Standard Dispatcher is used.
Figure 4. VSE guest migration from VM/ESA 1.2.2. DYNAPACE workload on a single VSE/ESA guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 on the 9121-320 processor.
The VSE DYNAPACE workload was run as a guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 in three modes: V=R, V=V, and V=V with the No Page Data Set (NPDS) option. The V=R guest environment had dedicated DASD with I/O assist. The two V=V guest environments were configured with full pack minidisk DASD with minidisk caching (MDC).
3 For more information on VSE/ESA 2.1.0 performance, refer to VSE/ESA 2.1.0 Performance Considerations.
For each guest environment, internal throughput rates were equivalent to VM/ESA 1.2.2 within measurement variability.

When comparing guest ITR to VSE native, the V=R guest, with this workload, achieves about 88% of native. The V=V guest achieves about 71% of native performance running with or without the NPDS option.
Figure 5. VSE guest migration from VM/ESA 1.2.2. DYNAPACE workload on a single VSE/ESA guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 on the 9121-320 processor.
Figure 5 shows elapsed time comparisons of the various guest modes running under VM/ESA 1.2.2 and VM/ESA 2.1.0. The elapsed time duration of the batch jobs remains unchanged (within run variability) for all environments. The benefits of MDC are demonstrated in the ETR results for the V=V guest environments.
Workload: DYNAPACE

Hardware Configuration

Processor model: 9121-320⁴
Storage:
  Real: 256MB
  Expanded: 0MB

DASD:

  Type of  Control  Number               - Number of Volumes -
  DASD     Unit     of Paths   PAGE SPOOL TDSK VSAM VSE Sys. VM Sys.
  3380-A   3880-03  2          1
  3390-2   3990-02  4          10 2
  3380-K   3990-03  4          10

Software Configuration

VSE version: 2.1.0 (using the Standard Dispatcher)

Virtual Machines:

  Virtual
  Machine  Number  Type           Machine Size/Mode  SHARE  RESERVED  Other Options
  VSEVR    1       VSE V=R        96MB/ESA           100              IOASSIST ON, CCWTRANS OFF
    or
  VSEVV    1       VSE V=V        96MB/ESA           100              IOASSIST OFF
    or
  VSEVV    1       VSE V=V NPDS   224MB/ESA          100              IOASSIST OFF
  SMART    1       RTM            16MB/370           100
  WRITER   1       CP monitor     2MB/XA             100

Additional Information: Starting with VM/ESA 1.2.2, minidisk caching (MDC) became available for use with non-CMS guests. Therefore, all V=V guest measurements in this section were run with MDC active.

For all guest measurements in this section, VSE/ESA was run in an ESA virtual machine and the VSE supervisor was defined as MODE=ESA. The guest was run in three modes:

• V=R, mode=ESA
• V=V, mode=ESA
• V=V, mode=ESA NPDS (No Page Data Set)

All DASD are dedicated to the VSE V=R guest for these measurements (except for the VM system DASD volumes). All V=V measurement environments were defined with full pack minidisks and MDC. The VM system used for these guest measurements has a 96MB V=R area defined. For measurements with V=V guests, the V=R area is configured, but not used. Therefore, if the real storage
4 See “Hardware” on page 21 for an explanation of how this processor model was defined.
configuration on the processor is 256MB, then 160MB of usable storage is available for the VM system and V=V guest. For the V=V measurements, it is this effective real storage size that is shown in this section's measurement results tables.

Measurement Results: The VSE guest measurement results are provided in the following tables. The VSE native results are provided in Table 17 on page 69.
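The effective-storage figure quoted above is simply the configured real storage minus the reserved V=R area:

```python
real_storage_mb = 256     # real storage configured on the processor
vr_area_mb = 96           # V=R area defined on the VM system
effective_mb = real_storage_mb - vr_area_mb   # storage left for VM and the V=V guest
print(effective_mb)       # 160, the value shown in the V=V results tables
```

This is why the V=V tables below report 160MB of real storage rather than 256MB.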
Table 8 (Page 1 of 2). VSE/ESA V=R guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1R78PF0   L1R88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                96M        96M
  Guest Setting          V=R        V=R
  VSE Supervisor         ESA        ESA
  Processors             1          1

  Throughput (Min)
  Elapsed Time (C)       870.0      888.0      18.0        2.07%
  ETR (C)                7.72       7.57       -0.16       -2.03%
  ITR (H)                16.48      16.49      0.01        0.04%
  ITR                    16.43      16.45      0.02        0.10%
  ITRR (H)               1.000      1.000      0.000       0.04%
  ITRR                   1.000      1.001      0.001       0.10%

  Proc. Usage (Sec)
  PBT/CMD (H)            3.640      3.639      -0.001      -0.04%
  PBT/CMD                3.651      3.647      -0.004      -0.10%
  CP/CMD (H)             0.275      0.281      0.007       2.50%
  CP/CMD                 0.233      0.238      0.005       2.07%
  EMUL/CMD (H)           3.366      3.358      -0.008      -0.25%
  EMUL/CMD               3.418      3.409      -0.009      -0.25%

  Processor Util.
  TOTAL (H)              46.87      45.90      -0.97       -2.07%
  TOTAL                  47.00      46.00      -1.00       -2.13%
  TOTAL EMUL (H)         43.33      42.35      -0.98       -2.27%
  TOTAL EMUL             44.00      43.00      -1.00       -2.27%
  TVR(H)                 1.08       1.08       0.00        0.21%
  TVR                    1.07       1.07       0.00        0.15%

  Storage
  NUCLEUS SIZE (V)       2584KB     2768KB     184KB       7.12%
  TRACE TABLE (V)        200KB      200KB      0KB         0.00%
  PGBLPGS                38624      38803      179         0.46%
  FREEPGS                82         82         0           0.00%
  FREE UTIL              0.55       0.55       0.00        0.55%
  SHRPGS                 1053       1052       -1          -0.09%

  Paging
  PAGE/CMD               0.000      0.000      0.000       na
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           0.000      0.000      0.000       na
Table 8 (Page 2 of 2). VSE/ESA V=R guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1R78PF0   L1R88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           256MB      256MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                96M        96M
  Guest Setting          V=R        V=R
  VSE Supervisor         ESA        ESA
  Processors             1          1

  I/O
  VIO RATE               1.000      1.000      0.000       0.00%
  VIO/CMD                7.768      7.929      0.161       2.07%
  RIO RATE (V)           2.000      2.000      0.000       0.00%
  RIO/CMD (V)            15.536     15.857     0.321       2.07%
  DASD IO TOTAL (V)      15831      16711      880         5.56%
  DASD IO RATE (V)       1.76       1.86       0.10        5.56%
  DASD IO/CMD (V)        13.66      14.72      1.06        7.74%
  MDC REAL SIZE (MB)     7.3        6.0        -1.2        -17.10%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       0.03       0.03       0           0.00%
  MDC WRITES (I/Os)      0.03       0.03       0           0.00%
  MDC AVOID              0.01       0.01       0           0.00%
  MDC HIT RATIO          0.40       0.30       -0.10       -25.00%

  PRIVOPs
  PRIVOP/CMD (R)         10.884     10.884     0.000       0.00%
  DIAG/CMD (R)           606.089    618.143    12.054      1.99%
  SIE/CMD                2679.911   2671.929   -7.982      -0.30%
  SIE INTCPT/CMD         2117.129   2110.824   -6.306      -0.30%
  FREE TOTL/CMD          528.214    531.214    3.000       0.57%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
Table 9 (Page 1 of 2). VSE/ESA V=V guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1V78PF0   L1V88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           160MB      160MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                96M        96M
  Guest Setting          V=V        V=V
  VSE Supervisor         ESA        ESA
  Processors             1          1

  Throughput (Min)
  Elapsed Time (C)       542.0      537.0      -5.0        -0.92%
  ETR (C)                12.40      12.51      0.12        0.93%
  ITR (H)                13.22      13.12      -0.09       -0.71%
  ITR                    13.93      13.17      -0.76       -5.44%
  ITRR (H)               1.000      0.993      -0.007      -0.71%
  ITRR                   1.000      0.946      -0.054      -5.44%

  Proc. Usage (Sec)
  PBT/CMD (H)            4.539      4.571      0.033       0.72%
  PBT/CMD                4.307      4.555      0.248       5.76%
  CP/CMD (H)             1.177      1.206      0.028       2.41%
  CP/CMD                 1.016      1.055      0.039       3.80%
  EMUL/CMD (H)           3.362      3.366      0.004       0.12%
  EMUL/CMD               3.291      3.500      0.209       6.36%

  Processor Util.
  TOTAL (H)              93.79      95.35      1.55        1.66%
  TOTAL                  89.00      95.00      6.00        6.74%
  TOTAL EMUL (H)         69.46      70.20      0.73        1.06%
  TOTAL EMUL             68.00      73.00      5.00        7.35%
  TVR(H)                 1.35       1.36       0.01        0.59%
  TVR                    1.31       1.30       -0.01       -0.57%

  Storage
  NUCLEUS SIZE (V)       2584KB     2768KB     184KB       7.12%
  TRACE TABLE (V)        200KB      200KB      0KB         0.00%
  PGBLPGS                38544      38719      175         0.45%
  FREEPGS                101        104        3           2.97%
  FREE UTIL              0.61       0.60       -0.01       -2.10%
  SHRPGS                 91         51         -40         -43.96%

  Paging
  PAGE/CMD               82.268     177.402    95.134      115.64%
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           222.607    234.938    12.330      5.54%

  I/O
  VIO RATE               610.000    651.000    41.000      6.72%
  VIO/CMD                2951.964   3121.313   169.348     5.74%
  RIO RATE (V)           270.000    272.000    2.000       0.74%
  RIO/CMD (V)            1306.607   1304.143   -2.464      -0.19%
  DASD IO TOTAL (V)      145053     146252     1199        0.83%
  DASD IO RATE (V)       268.62     270.84     2.22        0.83%
  DASD IO/CMD (V)        1299.91    1298.57    -1.35       -0.10%
  MDC REAL SIZE (MB)     112.6      113.4      0.8         0.68%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       372        397        25          6.72%
  MDC WRITES (I/Os)      185        198        13          7.03%
  MDC AVOID              349        373        24          6.88%
  MDC HIT RATIO          0.87       0.87       0           0.00%
Table 9 (Page 2 of 2). VSE/ESA V=V guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1V78PF0   L1V88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           160MB      160MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                96M        96M
  Guest Setting          V=V        V=V
  VSE Supervisor         ESA        ESA
  Processors             1          1

  PRIVOPs
  PRIVOP/CMD (R)         2949.616   3118.813   169.196     5.74%
  DIAG/CMD (R)           458.144    471.902    13.757      3.00%
  SIE/CMD                13027.357  13775.009  747.652     5.74%
  SIE INTCPT/CMD         11333.801  11984.258  650.457     5.74%
  FREE TOTL/CMD          3614.946   3787.768   172.821     4.78%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
Table 10 (Page 1 of 2). VSE/ESA V=V NPDS guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1O78PF3   L1O88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           160MB      160MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                224M       224M
  Guest Setting          V=V        V=V
  VSE Supervisor         ESA        ESA
  Processors             1          1

  Throughput (Min)
  Elapsed Time (C)       560.0      540.0      -20.0       -3.57%
  ETR (C)                12.00      12.44      0.44        3.70%
  ITR (H)                13.19      13.12      -0.07       -0.54%
  ITR                    13.19      13.10      -0.09       -0.66%
  ITRR (H)               1.000      0.995      -0.005      -0.54%
  ITRR                   1.000      0.993      -0.007      -0.66%

  Proc. Usage (Sec)
  PBT/CMD (H)            4.548      4.573      0.025       0.54%
  PBT/CMD                4.550      4.580      0.030       0.67%
  CP/CMD (H)             1.194      1.208      0.013       1.12%
  CP/CMD                 1.100      1.109      0.009       0.81%
  EMUL/CMD (H)           3.354      3.365      0.011       0.33%
  EMUL/CMD               3.450      3.471      0.021       0.62%
Table 10 (Page 2 of 2). VSE/ESA V=V NPDS guest migration from VM/ESA 1.2.2 on 9121-320, DYNAPACE workload

  VM/ESA Release         1.2.2      2.1.0      Difference  %Difference
  Run ID                 L1O78PF3   L1O88PF0

  Environment
  IML Mode               ESA        ESA
  Real Storage           160MB      160MB
  Exp. Storage           0MB        0MB
  VM Mode                ESA        ESA
  VM Size                224M       224M
  Guest Setting          V=V        V=V
  VSE Supervisor         ESA        ESA
  Processors             1          1

  Processor Util.
  TOTAL (H)              90.96      94.84      3.88        4.26%
  TOTAL                  91.00      95.00      4.00        4.40%
  TOTAL EMUL (H)         67.08      69.79      2.72        4.05%
  TOTAL EMUL             69.00      72.00      3.00        4.35%
  TVR(H)                 1.36       1.36       0.00        0.21%
  TVR                    1.32       1.32       0.00        0.05%

  Storage
  NUCLEUS SIZE (V)       2584KB     2768KB     184KB       7.12%
  TRACE TABLE (V)        200KB      200KB      0KB         0.00%
  PGBLPGS                38767      38723      -44         -0.11%
  FREEPGS                103        104        1           0.97%
  FREE UTIL              0.61       0.60       -0.01       -2.15%
  SHRPGS                 317        143        -174        -54.89%

  Paging
  PAGE/CMD               120.000    188.036    68.036      56.70%
  XSTOR/CMD              0.000      0.000      0.000       na
  FAST CLR/CMD           235.000    236.250    1.250       0.53%

  I/O
  VIO RATE               624.000    648.000    24.000      3.85%
  VIO/CMD                3120.000   3124.286   4.286       0.14%
  RIO RATE (V)           258.000    282.000    24.000      9.30%
  RIO/CMD (V)            1290.000   1359.643   69.643      5.40%
  DASD IO TOTAL (V)      153980     151742     -2238       -1.45%
  DASD IO RATE (V)       256.63     281.00     24.37       9.50%
  DASD IO/CMD (V)        1283.17    1354.84    71.67       5.59%
  MDC REAL SIZE (MB)     112.9      113.2      0.2         0.19%
  MDC XSTOR SIZE (MB)    0.0        0.0        0.0         na
  MDC READS (I/Os)       381        395        14          3.67%
  MDC WRITES (I/Os)      190        197        7           3.68%
  MDC AVOID              358        372        14          3.91%
  MDC HIT RATIO          0.87       0.87       0           0.00%

  PRIVOPs
  PRIVOP/CMD (R)         3119.125   3120.384   1.259       0.04%
  DIAG/CMD (R)           482.500    472.429    -10.071     -2.09%
  SIE/CMD                13810.000  13760.357  -49.643     -0.36%
  SIE INTCPT/CMD         12014.700  11971.511  -43.189     -0.36%
  FREE TOTL/CMD          3795.000   3780.000   -15.000     -0.40%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
9121-480 / DYNAPACE

This section examines VSE/ESA 2.1.0 guest performance of VM/ESA 1.2.2 compared to VM/ESA 2.1.0 running the DYNAPACE workload on a 9121-480. Because the 9121-480 has two processors, the VSE/ESA Turbo Dispatcher⁵ is used.
Figure 6. VSE guest migration from VM/ESA 1.2.2. DYNAPACE workload on a single VSE/ESA guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 on the 9121-480 processor.
The VSE DYNAPACE workload was run as a guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 in three modes: V=R, V=V, and V=V with the No Page Data Set (NPDS) option. The V=R guest environment had dedicated DASD with I/O assist. The two V=V guest environments were configured with full pack minidisk DASD with minidisk caching (MDC). The VSE/ESA Turbo Dispatcher was enabled with 2 processors active. All Turbo environments were run with un-dedicated processors.

For each guest mode, internal throughput rates were equivalent to VM/ESA 1.2.2 within measurement variability.
5 For more information on the VSE/ESA Turbo Dispatcher, refer to VSE/ESA 2.1 Turbo Dispatcher Performance.
When comparing guest ITR to VSE native, the V=R guest, with this workload, achieves about 91% of native. The V=V guest achieves about 63% of native performance running with or without the NPDS option.
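These "percentage of native" figures are just the ratio of guest ITR to native ITR. As a minimal sketch (the native ITR value below is an illustrative assumption, since the native results live in Table 17 on page 69, outside this excerpt):

```python
def pct_of_native(guest_itr, native_itr):
    # Guest internal throughput expressed as a percentage of VSE native throughput.
    return guest_itr / native_itr * 100.0

guest_vr = 28.89       # V=R guest ITR(H) from Table 11
native = 31.7          # illustrative native ITR(H) (assumption; see Table 17)
print(round(pct_of_native(guest_vr, native)))   # roughly 91
```

The V=V figure is computed the same way with the V=V guest's ITR in the numerator.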
Figure 7. VSE guest migration from VM/ESA 1.2.2. DYNAPACE workload on a single VSE/ESA guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 on the 9121-480 processor.
Figure 7 shows elapsed time comparisons of the various guest modes running under VM/ESA 1.2.2 and VM/ESA 2.1.0. The elapsed time duration of the batch jobs remains unchanged (within run variability) for all environments. The benefits of MDC are demonstrated in the ETR results for the V=V guest environments.
Workload: DYNAPACE

Hardware Configuration

Processor model: 9121-480
Storage:
  Real: 256MB
  Expanded: 0MB

DASD:

  Type of  Control  Number               - Number of Volumes -
  DASD     Unit     of Paths   PAGE SPOOL TDSK VSAM VSE Sys. VM Sys.
  3380-A   3880-03  2          1
  3390-2   3990-02  4          10 2
  3380-K   3990-03  4          10

Software Configuration

VSE version: 2.1.0 (using the Turbo Dispatcher)

Virtual Machines:

  Virtual
  Machine  Number  Type           Machine Size/Mode  SHARE  RESERVED  Other Options
  VSEVR    1       VSE V=R        96MB/ESA           100              IOASSIST ON, CCWTRANS OFF, UNDEDICATE
    or
  VSEVV    1       VSE V=V        96MB/ESA           100              IOASSIST OFF
    or
  VSEVV    1       VSE V=V NPDS   224MB/ESA          100              IOASSIST OFF
  SMART    1       RTM            16MB/370           100
  WRITER   1       CP monitor     2MB/XA             100

Additional Information: Starting with VM/ESA 1.2.2, minidisk caching (MDC) became available for use with non-CMS guests. Therefore, all V=V guest measurements in this section were run with MDC active.

For all guest measurements in this section, VSE/ESA was run in an ESA virtual machine and the VSE supervisor was defined as MODE=ESA. The guest was run in three modes:

• V=R, mode=ESA
• V=V, mode=ESA
• V=V, mode=ESA NPDS (No Page Data Set)

All DASD are dedicated to the VSE V=R guest for these measurements (except for the VM system DASD volumes). All V=V measurement environments were defined with full pack minidisks and MDC. The VM system used for these guest measurements has a 96MB V=R area defined. For measurements with V=V guests, the V=R area is configured, but not used. Therefore, if the real storage configuration on the processor is 256MB, then 160MB of usable storage is available for the VM system and V=V guest. For the V=V measurements, it is
Migration from VM/ESA 1.2.2 57
Migration: VSE/ESA Guest
this effective real storage size that is shown in this section ′ s measurementresults tables.
Measurement Results: The VSE guest measurement results are provided in thefollowing tables. The VSE native results are provided in Table 17 on page 69.
Table 11 (Page 1 of 2). VSE/ESA V=R guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2R78PF2    L2R88PF1

Environment
  IML Mode                ESA         ESA
  Real Storage            256MB       256MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 96M         96M
  Guest Setting           V=R         V=R
  VSE Supervisor          ESA         ESA
  Processors              2           2

Throughput (Min)
  Elapsed Time (C)        954.0       897.0       -57.0        -5.97%
  ETR (C)                 7.04        7.49        0.45         6.35%
  ITR (H)                 28.82       28.89       0.07         0.24%
  ITR                     28.75       28.81       0.06         0.22%
  ITRR (H)                1.000       1.002       0.002        0.24%
  ITRR                    1.000       1.002       0.002        0.22%

Proc. Usage (Sec)
  PBT/CMD (H)             4.163       4.154       -0.010       -0.23%
  PBT/CMD                 4.174       4.165       -0.009       -0.22%
  CP/CMD (H)              0.401       0.398       -0.003       -0.82%
  CP/CMD                  0.341       0.400       0.060        17.53%
  EMUL/CMD (H)            3.762       3.756       -0.006       -0.17%
  EMUL/CMD                3.833       3.764       -0.069       -1.80%

Processor Util.
  TOTAL (H)               48.88       51.86       2.98         6.10%
  TOTAL                   49.00       52.00       3.00         6.12%
  UTIL/PROC (H)           24.44       25.93       1.49         6.10%
  UTIL/PROC               24.50       26.00       1.50         6.12%
  TOTAL EMUL (H)          44.17       46.89       2.73         6.17%
  TOTAL EMUL              45.00       47.00       2.00         4.44%
  MASTER TOTAL (H)        21.42       23.93       2.51         11.70%
  MASTER TOTAL            21.00       24.00       3.00         14.29%
  MASTER EMUL (H)         19.71       22.06       2.34         11.89%
  MASTER EMUL             20.00       22.00       2.00         10.00%
  TVR (H)                 1.11        1.11        0.00         -0.06%
  TVR                     1.09        1.11        0.02         1.61%

Storage
  NUCLEUS SIZE (V)        2584KB      2776KB      192KB        7.43%
  TRACE TABLE (V)         400KB       400KB       0KB          0.00%
  PGBLPGS                 38559       38528       -31          -0.08%
  FREEPGS                 89          87          -2           -2.25%
  FREE UTIL               0.54        0.55        0.01         1.24%
  SHRPGS                  1054        1054        0            0.00%

Paging
  PAGE/CMD                0.000       0.000       0.000        na
  XSTOR/CMD               0.000       0.000       0.000        na
  FAST CLR/CMD            0.000       0.000       0.000        na
Table 11 (Page 2 of 2). VSE/ESA V=R guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2R78PF2    L2R88PF1

Environment
  IML Mode                ESA         ESA
  Real Storage            256MB       256MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 96M         96M
  Guest Setting           V=R         V=R
  VSE Supervisor          ESA         ESA
  Processors              2           2

I/O
  VIO RATE                1.000       1.000       0.000        0.00%
  VIO/CMD                 8.518       8.009       -0.509       -5.97%
  RIO RATE (V)            2.000       2.000       0.000        0.00%
  RIO/CMD (V)             17.036      16.018      -1.018       -5.97%
  DASD IO TOTAL (V)       1744        1532        -212         -12.16%
  DASD IO RATE (V)        1.82        1.82        0.01         0.39%
  DASD IO/CMD (V)         15.47       14.61       -0.87        -5.61%
  MDC REAL SIZE (MB)      6.0         5.9         -0.1         -1.94%
  MDC XSTOR SIZE (MB)     0.0         0.0         0.0          na
  MDC READS (I/Os)        0.03        0.03        0            0.00%
  MDC WRITES (I/Os)       0.01        0.01        0            0.00%
  MDC AVOID               0.00        0.00        0            na
  MDC HIT RATIO           0.27        0.26        -0.01        -3.70%

PRIVOPs
  PRIVOP/CMD (R)          11.435      11.009      -0.426       -3.72%
  DIAG/CMD (R)            668.750     641.482     -27.268      -4.08%
  SIE/CMD                 2785.339    2715.027    -70.313      -2.52%
  SIE INTCPT/CMD          2172.565    2090.571    -81.994      -3.77%
  FREE TOTL/CMD           596.250     544.607     -51.643      -8.66%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
Table 12 (Page 1 of 2). VSE/ESA V=V guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2V78PF2    L2V88PF3

Environment
  IML Mode                ESA         ESA
  Real Storage            160MB       160MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 96M         96M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              2           2

Throughput (Min)
  Elapsed Time (C)        541.0       545.0       4.0          0.74%
  ETR (C)                 12.42       12.33       -0.09        -0.73%
  ITR (H)                 20.27       20.25       -0.01        -0.05%
  ITR                     20.36       20.21       -0.15        -0.73%
  ITRR (H)                1.000       0.999       -0.001       -0.05%
  ITRR                    1.000       0.993       -0.007       -0.73%

Proc. Usage (Sec)
  PBT/CMD (H)             5.922       5.925       0.003        0.05%
  PBT/CMD                 5.893       5.937       0.044        0.74%
  CP/CMD (H)              1.665       1.688       0.023        1.38%
  CP/CMD                  1.497       1.557       0.060        3.99%
  EMUL/CMD (H)            4.257       4.236       -0.020       -0.47%
  EMUL/CMD                4.396       4.379       -0.016       -0.37%

Processor Util.
  TOTAL (H)               122.59      121.75      -0.84        -0.68%
  TOTAL                   122.00      122.00      0.00         0.00%
  UTIL/PROC (H)           61.30       60.88       -0.42        -0.68%
  UTIL/PROC               61.00       61.00       0.00         0.00%
  TOTAL EMUL (H)          88.12       87.06       -1.06        -1.20%
  TOTAL EMUL              91.00       90.00       -1.00        -1.10%
  MASTER TOTAL (H)        61.55       61.10       -0.45        -0.72%
  MASTER TOTAL            61.00       61.00       0.00         0.00%
  MASTER EMUL (H)         44.67       44.18       -0.50        -1.11%
  MASTER EMUL             46.00       46.00       0.00         0.00%
  TVR (H)                 1.39        1.40        0.01         0.52%
  TVR                     1.34        1.36        0.01         1.11%

Storage
  NUCLEUS SIZE (V)        2584KB      2768KB      184KB        7.12%
  TRACE TABLE (V)         400KB       400KB       0KB          0.00%
  PGBLPGS                 38487       38456       -31          -0.08%
  FREEPGS                 107         108         1            0.93%
  FREE UTIL               0.60        0.58        -0.02        -3.62%
  SHRPGS                  201         331         130          64.68%

Paging
  PAGE/CMD                115.929     92.455      -23.473      -20.25%
  XSTOR/CMD               0.000       0.000       0.000        na
  FAST CLR/CMD            318.804     321.161     2.357        0.74%
Table 12 (Page 2 of 2). VSE/ESA V=V guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2V78PF2    L2V88PF3

Environment
  IML Mode                ESA         ESA
  Real Storage            160MB       160MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 96M         96M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              2           2

I/O
  VIO RATE                645.000     642.000     -3.000       -0.47%
  VIO/CMD                 3115.580    3124.018    8.438        0.27%
  RIO RATE (V)            281.000     275.000     -6.000       -2.14%
  RIO/CMD (V)             1357.330    1338.170    -19.161      -1.41%
  DASD IO TOTAL (V)       151432      148149      -3283        -2.17%
  DASD IO RATE (V)        280.43      274.30      -6.13        -2.18%
  DASD IO/CMD (V)         1354.58     1322.76     -31.82       -2.35%
  MDC REAL SIZE (MB)      112.5       111.6       -0.9         -0.77%
  MDC XSTOR SIZE (MB)     0.0         0.0         0.0          na
  MDC READS (I/Os)        395         393         -2           -0.51%
  MDC WRITES (I/Os)       196         195         -1           -0.51%
  MDC AVOID               369         371         2            0.54%
  MDC HIT RATIO           0.86        0.87        0.01         1.16%

PRIVOPs
  PRIVOP/CMD (R)          3120.286    3120.393    0.107        0.00%
  DIAG/CMD (R)            728.357     715.125     -13.232      -1.82%
  SIE/CMD                 16457.027   16637.098   180.071      1.09%
  SIE INTCPT/CMD          15140.465   15139.759   -0.705       0.00%
  FREE TOTL/CMD           3791.830    3756.607    -35.223      -0.93%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
Table 13 (Page 1 of 2). VSE/ESA V=V NPDS guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2O78PF2    L2O88PF2

Environment
  IML Mode                ESA         ESA
  Real Storage            160MB       160MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 224M        224M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              2           2

Throughput (Min)
  Elapsed Time (H)        567.0       561.0       -6.0         -1.06%
  ETR (H)                 11.85       11.98       0.13         1.07%
  ITR (H)                 20.09       20.02       -0.08        -0.38%
  ITR                     20.09       19.96       -0.12        -0.61%
  ITRR (H)                1.000       0.996       -0.004       -0.38%
  ITRR                    1.000       0.994       -0.006       -0.61%

Proc. Usage (Sec)
  PBT/CMD (H)             5.972       5.995       0.023        0.39%
  PBT/CMD                 5.974       6.011       0.037        0.62%
  CP/CMD (H)              1.684       1.684       0.000        -0.02%
  CP/CMD                  1.519       1.553       0.034        2.24%
  EMUL/CMD (H)            4.288       4.311       0.023        0.54%
  EMUL/CMD                4.455       4.458       0.003        0.07%

Processor Util.
  TOTAL (H)               117.96      119.69      1.72         1.46%
  TOTAL                   118.00      120.00      2.00         1.69%
  UTIL/PROC (H)           58.98       59.84       0.86         1.46%
  UTIL/PROC               59.00       60.00       1.00         1.69%
  TOTAL EMUL (H)          84.70       86.07       1.37         1.62%
  TOTAL EMUL              88.00       89.00       1.00         1.14%
  MASTER TOTAL (H)        58.43       60.04       1.61         2.76%
  MASTER TOTAL            58.00       60.00       2.00         3.45%
  MASTER EMUL (H)         42.47       43.70       1.24         2.91%
  MASTER EMUL             44.00       45.00       1.00         2.27%
  TVR (H)                 1.39        1.39        0.00         -0.16%
  TVR                     1.34        1.35        0.01         0.55%

Storage
  NUCLEUS SIZE (V)        2584KB      2768KB      184KB        7.12%
  TRACE TABLE (V)         400KB       400KB       0KB          0.00%
  PGBLPGS                 38484       38455       -29          -0.08%
  FREEPGS                 107         109         2            1.87%
  FREE UTIL               0.62        0.59        -0.03        -4.81%
  SHRPGS                  37          363         326          881.08%

Paging
  PAGE/CMD                207.563     105.187     -102.375     -49.32%
  XSTOR/CMD               0.000       0.000       0.000        na
  FAST CLR/CMD            243.000     315.562     72.562       29.86%
Table 13 (Page 2 of 2). VSE/ESA V=V NPDS guest migration from VM/ESA 1.2.2 on 9121-480, DYNAPACE workload.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L2O78PF2    L2O88PF2

Environment
  IML Mode                ESA         ESA
  Real Storage            160MB       160MB
  Exp. Storage            0MB         0MB
  VM Mode                 ESA         ESA
  VM Size                 224M        224M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              2           2

I/O
  VIO RATE                617.000     622.000     5.000        0.81%
  VIO/CMD                 3123.563    3115.554    -8.009       -0.26%
  RIO RATE (V)            270.000     275.000     5.000        1.85%
  RIO/CMD (V)             1366.875    1377.455    10.580       0.77%
  DASD IO TOTAL (V)       145094      147802      2708         1.87%
  DASD IO RATE (V)        268.69      273.71      5.01         1.87%
  DASD IO/CMD (V)         1360.26     1370.98     10.72        0.79%
  MDC REAL SIZE (MB)      112.1       112.3       0.1          0.13%
  MDC XSTOR SIZE (MB)     0.0         0.0         0.0          na
  MDC READS (I/Os)        377         381         4            1.06%
  MDC WRITES (I/Os)       187         189         2            1.07%
  MDC AVOID               355         359         4            1.13%
  MDC HIT RATIO           0.87        0.87        0            0.00%

PRIVOPs
  PRIVOP/CMD (R)          3118.402    3119.393    0.991        0.03%
  DIAG/CMD (R)            768.429     756.187     -12.241      -1.59%
  SIE/CMD                 16478.438   16489.393   10.955       0.07%
  SIE INTCPT/CMD          15160.163   15170.241   10.079       0.07%
  FREE TOTL/CMD           3766.500    3771.723    5.223        0.14%

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
9121-320 / VSECICS

This section examines VSE/ESA 2.1.0 guest performance running under VM/ESA 2.1.0 compared to VM/ESA 1.2.2. The VSECICS workload is used for these measurements. VSECICS is an online transaction processing workload and is characterized by light I/O. See Appendix A, "Workloads" on page 157 for a detailed description of the workload. All DASD are dedicated to the VSE V=R guest for these measurements (except for the VM system DASD volumes). All V=V guest measurements use full pack minidisk DASD and minidisk caching.
Figure 8. VSE guest migration from VM/ESA 1.2.2. VSECICS workload on a single VSE/ESA guest of VM/ESA 1.2.2 and VM/ESA 2.1.0 on the 9121-320 processor.

Comparing VM/ESA 1.2.2 to VM/ESA 2.1.0, internal throughput rates were equivalent within measurement variability.

When comparing guest ITR to VSE running native, the V=R guest, with this workload, achieved about 94% of native. The V=V guest achieved approximately 84% of native mode performance running with or without the NPDS option.
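These percent-of-native figures can be recomputed from the hardware-monitor values in Tables 14, 15, and 17. The sketch below assumes the usual definition of internal throughput rate, ITR = external throughput rate divided by the processor-busy fraction; the function name is illustrative, not from the report.

```python
# Reproduce the "percent of native" figures quoted above from the
# measured VSECICS values (Tables 14, 15, and 17).

def itr(etr: float, total_util_pct: float) -> float:
    """Internal throughput rate: ETR divided by processor-busy fraction."""
    return etr / (total_util_pct / 100.0)

# 9121-320, VSECICS: V=R guest (Table 14), V=V guest (Table 15),
# and VSE native (Table 17), hardware-monitor values.
itr_vr     = itr(65.79, 89.76)   # recomputed; Table 14 reports ITR (H) 73.29
itr_vv     = 65.58               # ITR (H) from Table 15
itr_native = 77.91               # ITR (H) from Table 17

print(round(100 * itr_vr / itr_native))   # 94  (% of native, V=R guest)
print(round(100 * itr_vv / itr_native))   # 84  (% of native, V=V guest)
```

The same arithmetic applied to the DYNAPACE tables yields the roughly 91% (V=R) and 63% (V=V) figures quoted earlier for the 9121-480.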
Workload: VSECICS

Hardware Configuration

Processor model: 9121-320 (see note 6)
Storage
  Real: 256MB
  Expanded: 0MB

DASD:

Type of DASD  Control Unit  Number of Paths  Number of Volumes (PAGE, SPOOL, TDSK, VSAM, VSE Sys., VM Sys.)
3380-A        3880-03       2                2
3380-K        3990-03       4                16
3390-2        3990-02       4                20, 2

Software Configuration

VSE version: 2.1.0 (using the Standard Dispatcher)

Virtual Machines:

Virtual Machine  Number  Type          Machine Size/Mode  SHARE  RESERVED  Other Options
VSEVR            1       VSE V=R       96MB/ESA           100              IOASSIST ON, CCWTRANS OFF
or CICVV         1       VSE V=V       96MB/ESA           100              IOASSIST OFF
or CICVV         1       VSE V=V NPDS  300MB/ESA          100              IOASSIST OFF
SMART            1       RTM           16MB/370           100
WRITER           1       CP monitor    2MB/XA             100

Measurement Discussion: The VSECICS workload was used to compare guest environments of VM/ESA 1.2.2 and VM/ESA 2.1.0 as well as VSE/ESA running native. For these measurement comparisons, the number of terminals was adjusted so that when running as a guest of VM/ESA 1.2.2, the CPU utilization was near 90%. Then, the same number of terminals was run in the same guest mode under VM/ESA 2.1.0.

The VSE guest measurement results for the VSECICS workload are provided in the following tables. The VSE native results are provided in Table 17 on page 69.

6 See "Hardware" on page 21 for an explanation of how this processor model was defined.
Table 14. VSE/ESA V=R guest migration from VM/ESA 1.2.2 on 9121-320, VSECICS.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L1R78C90    L1R88C90

Environment
  Real Storage            256M        256M
  Exp. Storage            0M          0M
  Users                   960         960
  VM Size                 96M         96M
  Guest Setting           V=R         V=R
  VSE Supervisor          ESA         ESA
  Processors              1           1

Response Time
  AVG RESP (C)            0.226       0.226       0.000        -0.10%

Throughput
  ETR (C)                 65.79       65.76       -0.02        -0.03%
  ITR (H)                 73.29       72.93       -0.36        -0.49%
  ITRR (H)                1.000       0.995       -0.005       -0.49%

Proc. Usage
  PBT/CMD (H)             13.644      13.712      0.068        0.50%
  PBT/CMD                 13.681      13.685      0.005        0.03%
  CP/CMD (H)              0.416       0.429       0.013        3.10%
  CP/CMD                  0.456       0.304       -0.152       -33.31%
  EMUL/CMD (H)            13.228      13.283      0.055        0.41%
  EMUL/CMD                13.225      13.381      0.157        1.18%

Processor Util.
  TOTAL (H)               89.76       90.17       0.42         0.46%
  TOTAL                   90.00       90.00       0.00         0.00%
  TOTAL EMUL (H)          87.02       87.35       0.33         0.38%
  TOTAL EMUL              87.00       88.00       1.00         1.15%
  TVR (H)                 1.03        1.03        0.00         0.08%
  TVR                     1.03        1.02        -0.01        -1.14%

Storage
  NUCLEUS SIZE (V)        2584K       2768K       184K         7.12%
  TRACE TABLE (V)         200K        200K        0K           0.00%
  PGBLPGS                 38626       38800       174          0.45%
  FREEPGS                 80          86          6            7.50%
  FREE UTIL               0.57        0.53        -0.04        -6.47%
  SHRPGS                  1080        1052        -28          -2.59%

Paging
  PAGE/CMD                0.000       0.000       0.000        na
  XSTOR/CMD               0.000       0.000       0.000        na

I/O
  VIO RATE                0.000       0.000       0.000        na
  VIO/CMD                 0.000       0.000       0.000        na
  RIO RATE (V)            2.000       2.000       0.000        na
  RIO/CMD (V)             0.030       0.030       0.000        na
  MDC REAL SIZE (MB)      7.7         8.2         0.5          6.10%
  MDC XSTOR SIZE (MB)     0.0         0.0         0.0          na
  MDC READS (I/Os)        0.03        0.03        0            0.00%
  MDC WRITES (I/Os)       0.03        0.03        0            0.00%
  MDC AVOID               0.01        0.01        0            0.00%
  MDC HIT RATIO           0.40        0.30        -0.10        -25.00%

PRIVOPs
  PRIVOP/CMD (R)          0.008       0.008       0.000        -1.26%
  DIAG/CMD (R)            0.694       0.694       0.000        0.03%
  SIE/CMD                 4.423       4.455       0.032        0.72%
  SIE INTCPT/CMD          2.256       2.272       0.016        0.72%
  FREE TOTL/CMD           0.882       0.897       0.016        1.76%

Note: V=VMPRF, H=Hardware Monitor, C=CICSPARS, Unmarked=RTM
Table 15. VSE/ESA V=V guest migration from VM/ESA 1.2.2 on 9121-320, VSECICS.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L1V78C90    L1V88C90

Environment
  Real Storage            160M        160M
  Exp. Storage            0M          0M
  Users                   880         880
  VM Size                 96M         96M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              1           1

Response Time
  AVG RESP (C)            0.239       0.237       -0.002       -0.79%

Throughput
  ETR (C)                 60.20       60.21       0.01         0.02%
  ITR (H)                 65.58       65.42       -0.16        -0.25%
  ITRR (H)                1.000       0.998       -0.002       -0.25%

Proc. Usage
  PBT/CMD (H)             15.249      15.287      0.037        0.25%
  PBT/CMD                 15.283      15.281      -0.003       -0.02%
  CP/CMD (H)              1.799       1.828       0.029        1.64%
  CP/CMD                  1.661       1.661       0.000        -0.02%
  EMUL/CMD (H)            13.451      13.459      0.008        0.06%
  EMUL/CMD                13.622      13.620      -0.002       -0.02%

Processor Util.
  TOTAL (H)               91.80       92.04       0.24         0.26%
  TOTAL                   92.00       92.00       0.00         0.00%
  TOTAL EMUL (H)          80.97       81.03       0.06         0.08%
  TOTAL EMUL              82.00       82.00       0.00         0.00%
  TVR (H)                 1.13        1.14        0.00         0.19%
  TVR                     1.12        1.12        0.00         0.00%

Storage
  NUCLEUS SIZE (V)        2584K       2768K       184K         7.12%
  TRACE TABLE (V)         200K        200K        0K           0.00%
  PGBLPGS                 38768       38718       -50          -0.13%
  FREEPGS                 103         107         4            3.88%
  FREE UTIL               0.67        0.64        -0.03        -5.13%
  SHRPGS                  36          36          0            0.00%

Paging
  PAGE/CMD                0.017       0.000       -0.017       -100.00%
  XSTOR/CMD               0.000       0.000       0.000        na

I/O
  VIO RATE                200.000     199.000     -1.000       -0.50%
  VIO/CMD                 3.322       3.305       -0.017       -0.52%
  RIO RATE (V)            130.000     127.000     -3.000       -2.31%
  RIO/CMD (V)             2.160       2.109       -0.050       -2.32%
  MDC REAL SIZE (MB)      113.1       113.2       0.1          0.13%
  MDC XSTOR SIZE (MB)     0.0         0.0         0.0          na
  MDC READS (I/Os)        86          83          -3           -3.49%
  MDC WRITES (I/Os)       67          70          3            4.48%
  MDC AVOID               70          73          3            4.29%
  MDC HIT RATIO           0.76        0.82        0.06         7.89%

PRIVOPs
  PRIVOP/CMD (R)          3.326       3.316       -0.010       -0.29%
  DIAG/CMD (R)            0.782       0.782       -0.001       -0.09%
  SIE/CMD                 18.174      18.187      0.014        0.07%
  SIE INTCPT/CMD          14.176      14.186      0.011        0.07%
  FREE TOTL/CMD           4.950       4.684       -0.267       -5.38%

Note: V=VMPRF, H=Hardware Monitor, C=CICSPARS, Unmarked=RTM
Table 16. VSE/ESA V=V NPDS guest migration from VM/ESA 1.2.2 on 9121-320, VSECICS.

VM/ESA Release            1.2.2       2.1.0       Difference   %Difference
Run ID                    L1O78C90    L1O88C90

Environment
  Real Storage            160M        160M
  Exp. Storage            0M          0M
  Users                   880         880
  VM Size                 300M        300M
  Guest Setting           V=V         V=V
  VSE Supervisor          ESA         ESA
  Processors              1           1

Response Time
  AVG RESP (C)            0.240       0.241       0.001        0.37%

Throughput
  ETR (C)                 60.07       60.10       0.03         0.05%
  ITR (H)                 65.54       65.32       -0.22        -0.33%
  ITRR (H)                1.000       0.997       -0.003       -0.33%

Proc. Usage
  PBT/CMD (H)             15.258      15.309      0.051        0.33%
  PBT/CMD                 15.316      15.309      -0.007       -0.05%
  CP/CMD (H)              1.805       1.820       0.014        0.80%
  CP/CMD                  1.665       1.664       -0.001       -0.05%
  EMUL/CMD (H)            13.452      13.489      0.036        0.27%
  EMUL/CMD                13.651      13.645      -0.006       -0.05%

Processor Util.
  TOTAL (H)               91.65       92.00       0.35         0.38%
  TOTAL                   92.00       92.00       0.00         0.00%
  TOTAL EMUL (H)          80.81       81.06       0.26         0.32%
  TOTAL EMUL              82.00       82.00       0.00         0.00%
  TVR (H)                 1.13        1.13        0.00         0.06%
  TVR                     1.12        1.12        0.00         0.00%

Storage
  NUCLEUS SIZE (V)        2584K       2768K       184K         7.12%
  TRACE TABLE (V)         200K        200K        0K           0.00%
  PGBLPGS                 38763       38724       -39          -0.10%
  FREEPGS                 104         103         -1           -0.96%
  FREE UTIL               0.66        0.66        -0.01        -0.86%
  SHRPGS                  36          36          0            0.00%

Paging
  PAGE/CMD                0.017       0.017       0.000        -0.05%
  XSTOR/CMD               0.000       0.000       0.000        na

I/O
  VIO RATE                199.000     199.000     0.000        0.00%
  VIO/CMD                 3.313       3.311       -0.002       -0.05%
  RIO RATE (V)            130.000     127.000     -3.000       -2.31%
  RIO/CMD (V)             2.164       2.113       -0.051       -2.35%
  MDC REAL SIZE (MB)      113.0       113.1       0.1          0.12%
  MDC READS (I/Os)        87          83          -4           -4.60%
  MDC WRITES (I/Os)       67          69          2            2.99%
  MDC AVOID               70          72          2            2.86%
  MDC HIT RATIO           0.75        0.81        0.06         8.00%

PRIVOPs
  PRIVOP/CMD (R)          3.323       3.323       0.000        0.01%
  DIAG/CMD (R)            0.783       0.784       0.001        0.06%
  SIE/CMD                 18.179      18.171      -0.008       -0.05%
  SIE INTCPT/CMD          14.180      14.173      -0.007       -0.05%
  FREE TOTL/CMD           4.978       4.709       -0.269       -5.39%

Note: V=VMPRF, H=Hardware Monitor, C=CICSPARS, Unmarked=RTM
Table 17. VSE/ESA 2.1.0 native results.

VSE/ESA Release           2.1.0       2.1.0       2.1.0
Workload                  DYNAPACE    DYNAPACE    VSECICS
Processor                 9121-320    9121-480    9121-320
Run ID                    L1N_8PF0    L2N_8PF0    L1N_8C90

Environment
  Real Storage            96M         96M         96M
  Users                   na          na          1040
  Processors              1           2           1

Response Time
  AVG RESP (C)            na          na          0.230

Throughput
  ETR (C)                 7.63        7.88        71.22
  ITR (H)                 18.66       31.18       77.91

Proc. Usage
  PBT/CMD (H)             3.216       3.847       0.0128

Processor Util.
  TOTAL (H)               40.88       50.52       91.41
  UTIL/PROC (H)           40.88       25.26       91.41

Note: H=Hardware Monitor, C=VSE console (DYNAPACE), C=CICSPARS (VSECICS)
Migration: VMSES/E
VMSES/E

VM Service Enhancements Staged/Extended (VMSES/E) in VM/ESA 2.1.0 includes a number of performance enhancements. Some of these improved execution performance:

• VMFBLD was improved in a number of areas (such as the new CSLGEN option of NUC, the improved performance and storage handling of requisite processing, and the minimization of multiple part handler calls, while still respecting requisite order).

• By adding the SPRODID option to VMFCOPY, the selection of files to be copied can be controlled.

Other VMSES/E 2.1.0 enhancements reduced the number of manual steps via automation:

• The new CNTRL option to override the control file name in the PPF (for GENCPBLS, VMFEXUPD, VMFNLS, and the VMFxASM family of commands).

• The build list option CNTRL has been added to VMFBDGEN to override CNTRL in the PPF.

• To allow multiple products the ability to update the CP load list, the GENCPBLS command has been modified to process multiple xxxMDLAT MACROs.

Three primary VMSES/E tools that help with the servicing of products were measured to quantify the effects of the new function and the performance enhancements:

• VMFREC EXEC receives the raw materials from a service tape and places them into the raw materials database.

• VMFAPPLY EXEC defines new maintenance levels based on the contents of the raw materials database.

• VMFBLD EXEC uses the defined maintenance levels to select the correct level and build the running product.

The biggest performance impact of all the new enhancements found in VMSES/E 2.1.0 came from the VMFBLD improvements. The regression measurements showed significant savings in build response time.

The improvements in the build function result in a virtual storage reduction by storing global dependencies only once. By removing duplicate object processing, dependency and requisite processing is bypassed.7 The VMFE2E module has been restructured and optimized to perform both chained GETs and SETs for data and to use its buffers more efficiently. VMFSIMPC has been modified to directly return data back two levels rather than use VMFSIM as an intermediate step when returning stem data back to VMFBLD (and other functions). Finally, VMFMSG has been improved by grouping its VMFE2E calls into one invocation (which helps VMFBLD).
7 This portion of the VMFBLD improvements is also available on VM/ESA 1.2.2 as APAR VM57938.
Overall, for the dedicated, single-user measurements reported here, the process of receiving and applying CMS service, and building CMS on VMSES/E 2.1.0 showed total elapsed time improved 6% when compared to VMSES/E 1.2.2. Virtual CPU time and Total CPU time each improved 13%. These measurements were made on the 9121-480 configuration.
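The overall 6% and 13% figures can be recomputed by summing the per-command results in Tables 18 through 21 (all values in seconds). A small sketch of that arithmetic:

```python
# Sum the VMFREC, VMFAPPLY, VMFBLD STATUS, and VMFBLD SERVICED
# results from Tables 18-21 and derive the overall improvements.
old = {"total_time": [557, 359, 135, 667],   # VMSES/E 1.2.2
       "total_cpu":  [187, 286, 119, 311],
       "virt_cpu":   [171, 279, 118, 298]}
new = {"total_time": [553, 349, 103, 606],   # VMSES/E 2.1.0
       "total_cpu":  [182, 277,  88, 239],
       "virt_cpu":   [167, 271,  87, 227]}

for metric in old:
    before, after = sum(old[metric]), sum(new[metric])
    pct = 100 * (before - after) / before
    print(f"{metric}: {before}s -> {after}s ({pct:.0f}% better)")
# total_time: 1718s -> 1611s (6% better)
# total_cpu: 903s -> 786s (13% better)
# virt_cpu: 866s -> 752s (13% better)
```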
The following measurements are provided to demonstrate the performance of these changes on VMSES/E 2.1.0.
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (one service tape for the receive command)

DASD:

Type of DASD  Control Unit  Number of Paths  Number of Volumes (PAGE, SPOOL, TDSK, User, Server, System)
3390-2        3990-2        4                16, 6, 6
3390-2        3990-3        2                2 R
3390-2        3990-3        4                2, 2, 16 R

Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.

Software Configuration

Virtual Machines:

Virtual Machine  Number  Type   Machine Size/Mode  SHARE  RESERVED  Other Options
MAINT            1       MAINT  30MB/XC

Measurement Discussion: All measurements were performed on a dedicated, first-level system with only one active user logged on (MAINT user ID). The objective of these measurements was to show that the new functional enhancements to VMSES/E 2.1.0 did not degrade performance when compared to VMSES/E 1.2.2 in an established service environment — where all Software Inventory Management (SIM) tables had been previously initialized. The SIM tables were initialized by using the same Recommended Service Upgrade (RSU) tape with both releases of VMSES/E. The purpose of initializing SIM was to remove the one-time costs associated with setting up SIM.

Once SIM was initialized, a Corrective (COR) service tape containing CMS service was loaded onto the system. The performance test system used for these measurements was set up so that the COR tape would be compatible with both VMSES/E 1.2.2 and VMSES/E 2.1.0; both releases worked on exactly the same service and the same raw materials database.

The CMS service from the COR tape was received. VMFREC was used to receive a total of 1728 CMS parts from seven tape files. Next, the apply function (VMFAPPLY) was used to process 206 PTFs. The build function (VMFBLD) with
the STATUS option was invoked and identified 149 build requirements. Finally, 15 build lists were processed after running the VMFBLD command with the SERVICED option.

The methodology described in this section applies to both VMSES/E 2.1.0 and VMSES/E 1.2.2. Performance data were collected before and after each command execution to determine total response time and the total amount of resources used by the execution of the command. The performance data were generated by the CP QUERY TIME command. No intermediate steps were necessary that required human intervention (for example, entering data, pressing a function key, or mounting a tape). Hence, the performance data reported were derived from uninterrupted running of the command.
The following performance indicators were used and can be found in the tables below:

Total Time (seconds): the total elapsed time for the command. This is computed by taking the difference between the start and stop time. More specifically, it is the time after the enter key is pressed (the command had already been typed) until the ready message is received.

Total CPU (seconds): the difference in TOTCPU for the user before and after running the command.

Virtual CPU (seconds): the difference in VIRTCPU for the user before and after running the command.

Two performance factors were not included in the results: 1) the time taken to investigate the necessary steps to invoke the function and 2) the time to manually error check the correctness of the information or the results. (The successful completion of each service command was checked after the command finished.)
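The before-and-after bookkeeping described above can be sketched as follows. This is a hypothetical illustration, not tooling from the report, and the "mmm:ss.ss" CPU-time field layout shown for the QUERY TIME response is an assumption:

```python
# Sketch: take CP QUERY TIME readings before and after a command
# and report the VIRTCPU/TOTCPU deltas, as described in the text.
import re

def cpu_seconds(field: str) -> float:
    """Convert an assumed 'mmm:ss.ss' CPU-time field to seconds."""
    minutes, seconds = field.split(":")
    return int(minutes) * 60 + float(seconds)

def deltas(before: str, after: str) -> dict:
    """Differences between two QUERY TIME responses (assumed layout)."""
    pat = re.compile(r"VIRTCPU=\s*(\S+)\s+TOTCPU=\s*(\S+)")
    v0, t0 = pat.search(before).groups()
    v1, t1 = pat.search(after).groups()
    return {"virtual_cpu": cpu_seconds(v1) - cpu_seconds(v0),
            "total_cpu": cpu_seconds(t1) - cpu_seconds(t0)}

# Illustrative readings (not actual measurement output):
before = "CONNECT= 00:10:00 VIRTCPU= 001:12.00 TOTCPU= 001:20.50"
after  = "CONNECT= 00:20:00 VIRTCPU= 004:03.00 TOTCPU= 004:23.50"
print(deltas(before, after))  # {'virtual_cpu': 171.0, 'total_cpu': 183.0}
```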
Workload: Receive

Command: VMFREC PPF ESA CMS
Scenario Details: 1728 parts received from 7 tape files.
Table 18. VMFREC measurement data: migration from VMSES/E 1.2.2 on the 9121-480

VMSES/E Release     1.2.2   2.1.0   Difference   %Difference
Total Time (QT)     557     553     -4           -1%
Total CPU (QT)      187     182     -5           -2%
Virtual CPU (QT)    171     167     -4           -2%

Note: QT=CP QUERY TIME
Workload: Apply

Command: VMFAPPLY PPF ESA CMS
Scenario Details: 206 PTFs after receiving parts from COR tape.

Table 19. VMFAPPLY measurement data: migration from VMSES/E 1.2.2 on the 9121-480

VMSES/E Release     1.2.2   2.1.0   Difference   %Difference
Total Time (QT)     359     349     -10          -3%
Total CPU (QT)      286     277     -9           -3%
Virtual CPU (QT)    279     271     -8           -3%

Note: QT=CP QUERY TIME

Workload: Build with STATUS Option

Command: VMFBLD PPF ESA CMS (STATUS
Scenario Details: 149 build requirements identified.

Table 20. VMFBLD STATUS measurement data: migration from VMSES/E 1.2.2 on the 9121-480

VMSES/E Release     1.2.2   2.1.0   Difference   %Difference
Total Time (QT)     135     103     -32          -24%
Total CPU (QT)      119     88      -31          -26%
Virtual CPU (QT)    118     87      -31          -26%

Note: QT=CP QUERY TIME

Workload: Build with SERVICED Option

Command: VMFBLD PPF ESA CMS (SERVICED
Scenario Details: 15 build lists processed; 149 objects built.

Table 21. VMFBLD SERVICED measurement data: migration from VMSES/E 1.2.2 on the 9121-480

VMSES/E Release     1.2.2   2.1.0   Difference   %Difference
Total Time (QT)     667     606     -61          -10%
Total CPU (QT)      311     239     -72          -23%
Virtual CPU (QT)    298     227     -71          -24%

Note: QT=CP QUERY TIME
Migration from Other VM Releases
The performance results provided in this report apply to migration from VM/ESA 1.2.2. This section discusses how to use the information in this report along with similar information from earlier reports to get an understanding of the performance of migrating from earlier VM releases.

Note: In this section, VM/ESA releases prior to VM/ESA 2.1.0 are sometimes referred to without the version number. For example, VM/ESA 2.2 refers to VM/ESA Version 1 Release 2.2.

Migration Performance Measurements Matrix

The matrix on the following page is provided as an index to all the performance measurements pertaining to VM migration that are available in the VM/ESA performance reports. The numbers that appear in the matrix indicate which report includes migration results for that case:
10 VM/ESA Release 1.0 Performance Report
11 VM/ESA Release 1.1 Performance Report
20 VM/ESA Release 2.0 Performance Report
21 VM/ESA Release 2.1 Performance Report
22 VM/ESA Release 2.2 Performance Report
210 VM/ESA Version 2 Release 1.0 Performance Report (this document)
See “Referenced Publications” on page 5 for more information on these reports.
Many of the comparisons listed in the matrix are for two consecutive VM releases. For migrations that skip one or more VM releases, you can get a general idea of how the migration will affect performance by studying the applicable results for those two or more comparisons that, in combination, span those VM releases. For example, to get a general understanding of how migrating from VM/ESA 1.2.1 to VM/ESA 2.1.0 will tend to affect VSE guest performance, look at the VM/ESA 1.2.1 to VM/ESA 1.2.2 comparison measurements and the VM/ESA 1.2.2 to VM/ESA 2.1.0 comparison measurements. In each case, use the measurements from the system configuration that best approximates your VM system. For more discussion on the use of multiple comparisons, see page 79.
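The multi-comparison approach described above amounts to multiplying the individual ITR ratios together, which is also the method behind the "ITRR Derivation" column of Table 23. A minimal sketch, using two single-step ratios reported there for the CMS-intensive environment:

```python
# Chaining release-to-release ITR ratios to estimate the net capacity
# change over a migration that skips releases.
from functools import reduce
from operator import mul

def chained_itrr(ratios):
    """Net ITR ratio across several consecutive migrations."""
    return reduce(mul, ratios, 1.0)

# Table 23 reports R21*R22 = 1.08 and R22 = 1.05 for the minidisk-only
# CMS-intensive environment, so the implied single-step R21 is:
r22 = 1.05
r21 = 1.08 / 1.05
print(round(r21, 2))                        # 1.03
print(round(chained_itrr([r21, r22]), 2))   # 1.08
```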
The comparisons listed for the CMS-intensive environment primarily consist of minidisk-only measurements, but there are some SFS comparisons as well.

Internal throughput rate ratio (ITRR) information for the minidisk-only CMS-intensive environment has been extracted from the CMS comparisons listed in the matrix and is summarized in "Migration Summary: CMS-Intensive Environment" on page 76.
Table 22. Sources of VM migration performance measurement results
Source Target ProcessorReport Number
CMS OV/VMVSE
GuestMVS
Guest
VM/SP 5 VM/ESA 1.0 (370)VM/ESA 1.0 (370)VM/ESA 1.0 (370)VM/ESA 2.0VM/ESA 2.0
4381-139221-1709221-1209221-1709221-120
10
20
20
20202020
VM/SP 6 VM/ESA 1.0 (370) 4381-139370-809370-30
101010
VM/SP HPO5 VM/ESA 1.0 (ESA)VM/ESA 2.0VM/ESA 2.0
3090*-200J9121-4809121-320
102020
VM/ESA 1.0 (370) VM/ESA 1.5 (370)VM/ESA 1.1VM/ESA 2.0VM/ESA 2.0
9221-1209221-1709221-1709221-120
22112020
2020
VM/XA* 2.0 VM/ESA 1.0 (ESA) 3090-600J 10
VM/XA 2.1 VM/ESA 1.0 (ESA)VM/ESA 1.0 (ESA)VM/ESA 1.0 (ESA)VM/ESA 1.0 (ESA)VM/ESA 1.1VM/ESA 1.1
3090-600J3090-200J9021-7209121-3209021-7209121-320
1010
11
1111
11
10
VM/ESA 1.0 (ESA) VM/ESA 1.1 3090-600J9021-7209021-5809121-4809121-3209221-170
1111111111
11
11
11
VM/ESA 1.1 VM/ESA 2.0 9021-9009021-7209121-4809121-3209221-170
20
20
20
2020
20
20
VM/ESA 2.0 VM/ESA 2.1 9121-7429121-4809121-3209221-170
2121
21
2121
21
VM/ESA 2.1 VM/ESA 2.2 9121-7429121-4809121-3209221-170
2222
2222
VM/ESA 2.2 VM/ESA 2.1.0 9121-7429121-4809121-3209221-170
210210
210
210210
Migration Summary: CMS-Intensive Environment

A large body of performance information for the CMS-intensive environment has been collected over the last several releases of VM. This section summarizes the internal throughput rate (ITR) data from those measurements to show, for CMS-intensive workloads, the approximate changes in processing capacity that may occur when migrating from one VM release to another. As such, this section can serve as one source of migration planning information.

The performance relationships shown here are limited to the minidisk-only CMS-intensive environment. Other types of VM usage may show different relationships. Furthermore, any one measure such as ITR cannot provide a complete picture of the performance differences between VM releases. The VM performance reports from which the ITR ratios (ITRRs) were extracted can serve as a good source of additional performance information. Those reports are listed on page 74.

Table 23 summarizes the ITR relationships that were observed for the CMS-intensive environment for a number of VM release-to-release transitions:
Table 23. Approximate VM relative capacity: CMS-intensive environment

Source             Target             Case        ITRR   ITRR Derivation                    Notes
VM/SP 5            VM/ESA 1.5 (370)   9221-120    0.94   R5*R13c                            1,5,7
                   VM/ESA 2.1.0       9221-120    0.90   R5*R13a*R2*R21*R22                 1,2,4,6-8
VM/SP 6            VM/ESA 1.5 (370)   9221-120    1.09   R6*R13c                            5
                   VM/ESA 2.1.0       9221-120    1.05   R6*R13a*R2*R21*R22                 2,4,6-8
VM/ESA 1.0 (370)   VM/ESA 1.5 (370)               1.02   R13c
                   VM/ESA 2.1.0       9221-120    0.98   R13a*R2*R21*R22                    2,6-8
                   VM/ESA 2.1.0       9221-170    1.05   R13b*R11*R2*R21*R22                4-8
VM/ESA 1.5 (370)   VM/ESA 2.1.0       9221-120    0.96   (1/R13c)*R13a*R2*R21*R22           2,6-8
                   VM/ESA 2.1.0       9221-170    1.03   (1/R13c)*R13b*R11*R2*R21*R22       4-8
VM/SP HPO 5        VM/ESA 2.1.0       UP, ¬4381   0.99   RHa*R2*R21*R22                     4,5,7,8
                   VM/ESA 2.1.0       MP, ¬4381   1.10   RHb*R1E*R11*R2*R21*R22             3-5,7,8
VM/XA 2.0          VM/ESA 2.1.0                   1.22   RX20*RX21*R1E*R11*R2*R21*R22       8
VM/XA 2.1          VM/ESA 2.1.0                   1.19   RX21*R1E*R11*R2*R21*R22            8
VM/ESA 1.0 ESA     VM/ESA 2.1.0                   1.15   R1E*R11*R2*R21*R22                 8
VM/ESA 1.1         VM/ESA 2.1.0                   1.10   R11*R2*R21*R22                     8
VM/ESA 2           VM/ESA 2.1.0                   1.09   R2*R21*R22                         8
VM/ESA 2.1         VM/ESA 2.1.0                   1.08   R21*R22                            8
VM/ESA 2.2         VM/ESA 2.1.0                   1.05   R22                                8
Explanation of columns:
Case  The set of conditions for which the stated ITRR approximately applies. When not specified, no large variations in ITRR were found among the cases that were measured. However, there is still some variability. These ITRR variations are shown in "Derivation and Supporting Data" on page 79.
ITRR  The target ITR divided by the source ITR. A number greater than 1.00 indicates an improvement in processor capacity.

ITRR Derivation  Shows how the ITRR was derived. See "Derivation and Supporting Data" on page 79 for discussion.
Notes:
1. The VM/SP 5 system is assumed to include APAR VM30315, the performanceSPE that adds segment protection and 4KB key support. Othermeasurements have shown that VM/SP 5 ITR is 4% to 6% lower without thisAPAR.
2. This includes an increase of central storage from 16MB to 32MB tocompensate for VM/ESA ′ s larger storage requirements. The VM/ESA casealso includes 16MB of expanded storage for minidisk caching.
3. The VM/SP HPO 5 to VM/ESA 1.0.0 (ESA Feature) portion of the derivationwas done with a reduced think time to avoid a 16MB-line real storageconstraint in the HPO case. In cases where the base HPO system is16MB-line constrained, migration to VM/ESA will yield additionalperformance benefits by eliminating this constraint.
4. These estimates do not apply to 4381 processors. The ESA-capable 4381models provide less processing capacity when run in ESA mode ascompared to 370 mode. Therefore, expect a less favorable ITR ratio thanshown here when migrating on a 4381 processor from VM/SP, VM/SP HPO,or VM/ESA (370) to VM/ESA 2.1.0.
5. The target VM system supports a larger real memory size than the statedmigration source and this potential benefit is not reflected in the stated ITRratios. Migrations from memory-constrained environments will yieldadditional ITRR and other performance benefits when the target configurationhas additional real storage.
A VM/SP example: The stated VM/SP 5 to VM/ESA 1.1.5 (370 Feature) ITRRis based (in part) on a comparison of VM/SP 5 to VM/ESA 1.0.0 (370 Feature),which showed an ITRR of 0.92. This comparison was done with 16MB of realmemory. However, VM/ESA 1.0.0 (370 Feature) supports up to 64MB of realmemory (but subject to the 16MB-line constraint). When VM/SP 5 with 16MBwas compared to VM/ESA 1.0.0 (370 Feature) with 32MB, an ITRR of 0.98 wasobserved. See “CMS-Intensive Migration from VM/SP Release 5” in theVM/ESA Release 2 Performance Report for details.
A VM/SP HPO example: The stated VM/SP HPO 5 to VM/ESA 2.1.0 ITRR for uniprocessors is based (in part) on a VM/SP HPO 5 to VM/ESA 2 comparison, which showed an ITRR of 0.91. Those measurements were done on a 9121-320 system with its 256MB of storage configured as 64MB of real storage and 192MB of expanded storage (64MB/192MB). The 9121-320 had to be configured that way because 64MB is the maximum real storage supported by HPO. When VM/SP HPO Release 5.0 (64MB/192MB) was compared to VM/ESA 2 (192MB/64MB), an ITRR of 0.95 was observed. See “CMS-Intensive Migration from VM/SP HPO Release 5” in the VM/ESA Release 2 Performance Report for details.
Migration from Other VM Releases 77
6. These results apply to the case where the following recommended tuning is done for the target system:
• Use minidisk caching.
• On VM/ESA systems before VM/ESA Release 2, set DSPSLICE to three times the default. Otherwise, use the default value.
• For the 9221-120, set the VTAM DELAY operand in the VTAM CTCA channel-attachment major node to 0.3 seconds. For the 9221-170, set the VTAM delay to 0.2 seconds.
• Set IPOLL ON for VTAM.
• Preload the key shared segments.
See section “CMS-Intensive Migration from VM/ESA 1.1,” subsection “9221-170 / Minidisk” in the VM/ESA Release 2 Performance Report for more information on these tuning items. The purpose of this tuning is to configure VM/ESA for use on ESA-mode 9221 processors. If this tuning is not done, lower ITR ratios will be experienced. For example, for the FS7B0R CMS-intensive workload, going from VM/ESA 1.0.0 (370 Feature) to VM/ESA 1.1 resulted in an ITRR of 0.95 with the above tuning and an ITRR of 0.86 without it. This comparison is shown in the VM/ESA Release 1.1 Performance Report.
7. There has been growth in CMS real storage requirements on a per-user basis. This growth is reflected in the ITR ratios to only a limited extent and should therefore be taken into consideration separately. The most significant growth took place in VM/SP 6 and in VM/ESA 2.0. The VM/SP 6 increase can affect the performance of migrations from VM/SP 5 and VM/SP HPO 5. The VM/ESA 2.0 growth can affect the performance of migrations from VM releases prior to VM/ESA 2.0. Storage-constrained environments with large numbers of CMS users will be the most affected.
8. This ITRR value depends strongly upon the fact that CMS is now shipped with most of its REXX execs and XEDIT macros compiled (see “Performance Improvements” on page 9). If these are already compiled on your system, divide the ITRR shown by 1.07.
Table 23 on page 76 only shows performance in terms of ITR ratios (processor capacity). It does not provide, for example, any response time information. An improved ITR tends to result in better response times and vice versa. However, exceptions occur. An especially noteworthy exception is the migration from 370-based VM releases to VM/ESA. In such migrations, response times have frequently been observed to improve significantly, even in the face of an ITR decrease. One pair of measurements, for example, showed a 30% improvement in response time, even though ITR decreased by 5%. When this occurs, factors such as XA I/O architecture and minidisk caching outweigh the adverse effects of increased processor usage. These factors have a positive effect on response time because they reduce I/O wait time, which is often the largest component of system response time.
Keep in mind that in an actual migration to a new VM release, other factors (such as hardware, licensed product release levels, and workload) are often changed in the same time frame. It is not unusual for the performance effects from upgrading VM to be outweighed by the performance effects from these additional changes.
These VM ITRR estimates can be used in conjunction with the appropriate hardware ITRR figures to estimate the overall performance change that would result from migrating both hardware and VM. For example, suppose that the new processor's ITR is 1.30 times that of the current system and suppose that the migration also includes an upgrade from VM/ESA 2.1 to VM/ESA 2.1.0. From Table 23 on page 76, the estimated ITRR for migrating from VM/ESA 2.1 to VM/ESA 2.1.0 is 1.08. Therefore, the estimated overall increase in system capacity is 1.30*1.08 = 1.40.
Table 23 on page 76 represents CMS-intensive performance for the case where all files are on minidisks. The release-to-release ITR ratios for shared file system (SFS) usage are very similar to the ones shown here. SFS release-to-release measurement results are provided in the reports listed on page 74.
Derivation and Supporting Data
This section explains how the ITR ratios shown above were derived.
The derivation column in Table 23 on page 76 shows how the stated ITR ratio was calculated. For example, the ITRR of 1.08 for migrating from VM/ESA 2.1 to VM/ESA 2.1.0 was calculated by multiplying the average ITRR for migrating from VM/ESA 2.1 to VM/ESA 2.2 (R21) by the average ITRR for migrating from VM/ESA 2.2 to VM/ESA 2.1.0 (R22): 1.03*1.05 = 1.08. R21 was calculated by averaging the ITRRs for VM measurement pairs 24 through 27 (see Table 24 on page 80). Likewise, R22 was calculated by averaging the ITRRs for VM measurement pairs 28 through 30.
For the case where the source system level is VM/ESA 1.5 (370), the term “1/R13c” resolves to “1/1.02.” This takes into account the fact that VM/ESA 1.5 (370) has a somewhat higher ITR than VM/ESA 1.0 (370). This makes the ITRR smaller when migrating to VM/ESA 2.1.0 from VM/ESA 1.5 (370) as compared to migrating from VM/ESA 1.0 (370).
Except where noted, any given measurement pair represents two measurements where the only difference is the VM release. As such, all the performance results obtained for one of the measurements in the pair can validly be compared to the corresponding results for the other measurement.
By contrast, there are often substantial environmental differences between unpaired measurements. Factors such as number of users, workload, processor model, and I/O configuration will often be different. This greatly limits the kinds of valid inferences that can be drawn when trying to compare data across two or more measurement pairs. For example, response times are very sensitive to a number of specific environmental factors and therefore should only be compared within a set of controlled, comparable measurements.
For this reason, Table 23 on page 76 only covers ITR ratios. Experience has shown that ITR ratios are fairly resistant to changes in the measurement environment. Consequently, combining the ITR ratios observed for individual release transitions (as explained above) provides a reasonably good estimate of the ITR ratio that would result for a migration that spans all those releases.
The ITR ratios shown in Table 23 on page 76 are based on the following pairs of measurements:
Table 24 (Page 1 of 2). Derivation and supporting data: VM measurement pairs

Pair  Source    Target                                Proc.  Base    ITR
No.   Run ID    Run ID    Processor   Memory          Util.  Pg/cmd  Ratio   Symbol

VM/SP 5 to VM/ESA 1.0 (370 Feature): FS7B0R Workload; Report 20
1     H1SR0091  H17R0090  9221-120    16MB            80     9       0.92    (R5)

VM/SP 6 to VM/ESA 1.0 (370 Feature): FS7B0; Report 10
2     EC4295    EC7603    4381-13     16MB            70     15      1.069
3     EC4295    EC7603    4381-13     16MB            80     20      1.075
avg                                                                  1.07    (R6)

VM/ESA 1.0 (370 Feature) to VM/ESA 2, 9221-120: FS7B0R; Report 20
4     H17R0090  H15R0091  9221-120    16MB, 32MB      80     11      0.90    (R13a)

VM/ESA 1.0 (370 Feature) to VM/ESA 1.1, 9221-170: FS7B0R; Report 11
5     H17R0281  H14R0287  9221-170    64MB            80     7       0.95    (R13b)

VM/ESA 1.0 (370 Feature) to VM/ESA 1.5 (370 Feature): FS7F0; Report 22
6     H17E0106  H17E0113              16MB            90     10      0.985
7     H17E0108  H17E0113              16MB            90     10      1.032
avg                                                                  1.02    (R13c)

VM/SP HPO 5 to VM/ESA 2: FS7B0R; Report 20
8     L1HR1033  L15R0951  9121-320    64MB/192MB      90     17      0.91    (RHa)

VM/SP HPO 5 to VM/ESA 1.0 (ESA Feature): FS7B0R; Report 10
9     Y25R1141  Y23R1143  3090-200J   64MB/512MB      90     22      0.97    (RHb)

VM/XA 2.0 to VM/XA 2.1: FS7B0R; Report 10
10    Y62R5401  Y6$R5401  3090-600J   512MB/2GB       90     15      1.02    (RX20)

VM/XA 2.1 to VM/ESA 1.0 (ESA Feature): FS7B0R; Report 10
11    Y2$R2001  Y23R2001  3090-200J   256MB/2GB       90     11      1.064
12    Y6$R5401  Y63R5405  3090-600J   512MB/2GB       90     12      1.029
avg                                                                  1.04    (RX21)

VM/ESA 1.0 (ESA Feature) to VM/ESA 1.1: FS7B0R; Report 11
13    Y63R5866  Y64R5865  9021-720    512MB/2GB       90     13      1.059
14    L23R1770  L24R1770  9121-480    192MB/64MB      90     13      1.032
15    L13R0911  L14R0910  9121-320    192MB/64MB      90     12      1.045
16    H13R0280  H14R0287  9221-170    48MB/16MB       80     11      1.043
avg                                                                  1.04    (R1E)

VM/ESA 1.1 to VM/ESA 2: FS7B0R; Report 20
17    264RB424  265RB426  9021-900    1GB/4GB         90     16      1.018
18    L24R1876  L25R187F  9121-480    192MB/64MB      90     14      1.005
19    L24R1821  L25R1823  9121-480    128MB/0MB       90     15      1.009
20    H14R0292  H15R0294  9221-170    48MB/16MB       90     12      1.009
avg                                                                  1.01    (R11)

VM/ESA 2 to VM/ESA 2.1: FS7F0R; Report 21
21    S45E5400  S46E5400  9121-742    1GB/1GB         90     17      1.012
22    S45E5201  S46E5200  9121-742    320MB/64MB      90     19      1.011
23    H15E0290  H16E0290  9221-170    48MB/16MB       90     15      1.016
avg                                                                  1.01    (R2)
80 VM/ESA 2.1.0 Performance Report
Table 24 (Page 2 of 2). Derivation and supporting data: VM measurement pairs

Pair  Source    Target                                Proc.  Base    ITR
No.   Run ID    Run ID    Processor   Memory          Util.  Pg/cmd  Ratio   Symbol

VM/ESA 2.1 to VM/ESA 2.2: FS8F0R; Report 22
24    S46E5505  S47E550A  9121-742    1GB/1GB         90     17      1.026
25    S46E5202  S47E5201  9121-742    320MB/64MB (8)  90     20      1.037
26    L26E186I  L27E186J  9121-480    224MB/32MB (8)  90     16      1.026
27    H16E0302  H17E0303  9221-170    48MB/16MB (8)   90     15      1.026
avg                                                                  1.03    (R21)

VM/ESA 2.2 to VM/ESA 2.1.0: FS8F0R; Report 210
28    S47E550D  S48E5500  9121-742    1GB/1GB         90     18      1.042
29    L27E1909  L28E190M  9121-480    256MB           90     16      1.070
30    H17E0304  H18E0303  9221-170    64MB            90     15      1.038
avg                                                                  1.05    (R22)
Note: The report numbers refer to the list of VM performance reports on page 74.
Explanation of columns:
Memory The amount of real storage and (when applicable) expanded storage in the measured configuration.
Proc. Util. Approximate processor utilization. The number of users is adjusted so that the source case runs at or near the stated utilization. The target case is then run with the same number of users.
Base Pg/cmd The average number of paging operations per command measured for the source case. This value gives an indication of how real-memory-constrained the environment is. For configurations with expanded storage used for paging, this value includes expanded storage PGIN and PGOUT operations in addition to DASD page reads and writes.
Symbol The symbol used to represent this release transition in Table 23 on page 76.
The FS7B0R, FS7F0R, or FS8F0R workloads (CMS-intensive, minidisks, remote users simulated by TPNS) were used for all comparisons except those involving VM/SP 6. For those comparisons, the FS7B0 workload was used (CMS-intensive, minidisks, local users simulated by the full screen internal driver (FSID) tool).
The results in this table illustrate that the release-to-release ITR ratios can and do vary to some extent from one measured environment to another.
8 These are the storage sizes used for the VM/ESA 1.2.1 measurements. For VM/ESA 1.2.2, the total storage size was the same but all of the expanded storage was reconfigured as real storage. This conforms to the usage guidelines for enhanced minidisk caching.
Migration from Other VM Releases 81
New Functions
A number of the functional enhancements in VM/ESA 2.1.0 have performance implications. This section contains performance evaluation results for the following functions:
• POSIX
• DCE
• GCS TSLICE Option
82 Copyright IBM Corp. 1995
POSIX
This section provides performance information and measurement results for the POSIX support provided by VM/ESA 2.1.0. The following topics are covered:
POSIX Initialization
POSIX Functions: CPU Usage
POSIX Functions: Real Storage Requirements
Shell Initialization and Termination
Shell Commands: CPU Usage
Shell Commands: Real Storage Requirements
Large File Performance
Byte File System Loading
POSIX Initialization

The POSIX environment is implicitly initialized in a virtual machine whenever the first POSIX-oriented request is issued. This request might be, for example, an OPENVM MOUNT request. Once the POSIX environment is initialized, it remains until the virtual machine is reset (IPL CMS).
The amount of time required to do POSIX initialization is relatively small. On an unconstrained 9021-900, the elapsed time was observed to be about 0.1 seconds and about 16 milliseconds of CPU time was required.
Perhaps the main performance implication of POSIX initialization is that about 640 additional non-shared pages are referenced. If there is no subsequent use of POSIX functions, these pages will typically be paged out to DASD. If there is subsequent use of POSIX functions, a subset of these pages will continue to be referenced and additional pages may be referenced as well (see “POSIX Functions: Real Storage Requirements” on page 86).
Once the POSIX environment has been initialized, there is a slight increase in resource requirements for subsequent execution of non-POSIX CMS work. For example, an instruction trace of an EXEC that copies a small file, XEDITs it, and then erases that file showed a 0.25% increase in virtual machine instructions executed and a 5-page increase in referenced non-shared pages when it was executed after POSIX initialization.
Because of these various additional resource requirements, we recommend that you not put POSIX-oriented commands such as OPENVM MOUNT in your PROFILE EXEC unless you will normally be using POSIX functions subsequent to starting CMS.
POSIX Functions: CPU Usage

CPU usage information was obtained for a selection of frequently used POSIX functions. Elapsed time, byte file system (BFS) server calls, and BFS DASD I/Os were also collected.
The data reported for each function represent an average of multiple (typically 50) loop iterations of that function. The functions were executed by a C program, while data collection was controlled by assembler routines called by that program. Elapsed times were obtained using the STCK instruction. User machine CPU times were obtained using diagnose code X'0C'. Byte file system server CPU times were obtained using the CP QUERY TIME command. BFS server statistics were collected using the QUERY FILEPOOL COUNTER command. All this information was collected immediately prior to and immediately following execution of each function loop.
The measurements were made on a non-dedicated 9021-900 during low usage conditions when contention from other system activity was minimal. The BFS server was dedicated during the measurement. Multiple measurement runs were obtained to verify repeatability. The results, shown in Table 25, are from a typical measurement run.
Table 25 (Page 1 of 2). Performance of individual POSIX calls on a 9021-900 processor

               Elapsed   Total      User       Server
               Time      CPU Time   CPU Time   CPU Time   BFS    BFS
Function       (µsec)    (µsec)     (µsec)     (µsec)     Calls  I/Os

chmod()          7699      3476       1676       1800      2      2
close()          1506      1574        974        600      1      0
closedir()       1470      1600       1000        600      1      0
creat()         10777      5114       2114       3000      2      2

fcntl()            69        68         68          0      0      0
fstat()           116       112        112          0      0      0
fsync()          1720      1766        966        800      1      0
ftruncate()      8102      2416       1016       1400      1      2

getcwd()         1538      1790        990        800      1      0
getlogin()         12        12         12          0      0      0
getpid()            4         4          4          0      0      0
getppid()           4         4          4          0      0      0

link()          11222      6862       3062       3800      3      2
lseek()            66        64         64          0      0      0
mkdir()         11739      6474       2874       3600      3      2
mkfifo()        14124      6790       2990       3800      3      2

open()          10474      4858       2058       2800      2      2
opendir()        1785      1762       1162        600      1      0
pipe()            465       462        462          0      0      0
read()            361       152        152          0      0      0

readdir()         324       236         36        200      0.02   0
readlink()        233       232        232          0      0      0
rename()        12941      8086       4086       4000      4      2
rmdir()          9065      4404       1804       2600      2      2

sigprocmask()       7         7          7          0      0      0
stat()            251       248        248          0      0      0
symlink()       12677      6770       2970       3800      3      2
time()              2         2          2          0      0      0
Notes:
1. All times are in microseconds.
2. The measurement accuracy of server CPU time is limited by the accuracy of the QUERY TIME command (hundredths of a second).
3. For getpid(), getppid(), getlogin(), time(), times(), and sigprocmask(), 1000 loop iterations were used. Fifty loop iterations were used for all other measured functions.
4. All files, directories, links, etc., were created in the root directory.
5. The performance of a given function can vary depending upon how it is invoked and other conditions. The following list provides qualifying information for the measured cases:
chmod()        grant all permissions to a file
close()        the file is empty
creat()        file does not exist
fcntl()        obtain file status and file access mode flags
fsync()        no modified data to be forced out
ftruncate()    zero-length file truncated to 0 bytes
getcwd()       root directory
lseek()        position to beginning of a zero-length file
open()         file does not exist
read()         100 bytes, successive reads are sequential
sigprocmask()  SIG_BLOCK
stat()         file exists and was recently referenced
write()        100 bytes, successive writes are sequential
The results indicate that POSIX functions that do not need to make any trips to the BFS server require much less CPU time than those that do. Also, CPU time is roughly proportional to the number of BFS server calls that are required.
For the cases where the measured function required one or more trips to the BFS server but there were no server I/Os, the elapsed time is often somewhat lower than total CPU time. This is because there is a slight amount of overlap between processor usage in the user machine and processor usage in the BFS server machine (the measurements were obtained on a 6-way processor).
The counts and timings provided by the QUERY FILEPOOL COUNTER command are helpful for understanding BFS server activity. Table 26 shows sample counter data for one of the measured POSIX functions.
Table 25 (Page 2 of 2). Performance of individual POSIX calls on a 9021-900 processor

               Elapsed   Total      User       Server
               Time      CPU Time   CPU Time   CPU Time   BFS    BFS
Function       (µsec)    (µsec)     (µsec)     (µsec)     Calls  I/Os

times()           197        38         38          0      0      0
unlink()        10104      5624       2624       3000      3      2
utime()          7226      2712       1312       1400      1      2
write()           363       152        152          0      0      0
These data were obtained by issuing the QUERY FILEPOOL COUNTER command immediately before and after executing the open() function loop, subtracting the results, and dividing by 50 (the number of loop iterations).
The results show that two BFS requests are required to implement the open. (The purpose of the lookup request is to determine whether the file already exists.) You can also see that the open resulted in two I/O requests (a forced write to each of the two logs), those I/O requests took a total of 6.08 msec to complete, and that this accounts for most of the 8.04 msec it took for the two BFS requests to be handled by the BFS server (File Pool Request Service Time).
Table 26. Sample Q FILEPOOL COUNTER data: open()

Count/Open   Counter Description
   1.00      Byte File Lookup Requests
   1.00      Byte File Open File New With Intent Write Requests
   0.02      Query File Pool Requests
   2.00      Total Byte File File Pool Requests
   2.02      Total File Pool Requests
   8.04      File Pool Request Service Time (msec)
   2.02      Local File Pool Requests
  19.14      SAC Calls
   1.00      LUW Rollbacks
   2.02      Begin LUWs
   8.70      Agent Holding Time (msec)
   2.40      Log Blocks Written
   2.40      Total DASD Block Transfers
   2.00      BIO Requests to Write Log Blocks
   2.00      Total BIO Requests
   6.08      Total BIO Request Time (msec)
   2.00      I/O Requests to Write Log Blocks
   2.00      Total I/O Requests
POSIX Functions: Real Storage Requirements

Instruction traces of the user virtual machine were collected for a subset of the POSIX functions listed in Table 25. This information was used to determine the number of unique pages that were referenced during the execution of each traced function and during the combined execution of all the traced functions. Each referenced page was classified as non-shared or shared based upon whether there is one such page per CMS user or whether there is just one instance of that page that is shared among all CMS users. The results are summarized in Table 27.
Table 27. Real storage requirements of selected POSIX calls

                 Unique        Unique
                 Non-Shared    Shared
Function         Pages         Pages

close()            77            81
fsync()            34            23
getpid()           18             3
open()             88            92
read()             40            30
sigprocmask()      20             4
write()            30            30

all               130           106
The count of non-shared pages is more important than the count of shared pages because each user doing that function requires a separate set of these pages.
All of the shared pages are from the CMS saved system.
The unique page reference counts listed in the table are for the user virtual machine. Those POSIX functions that require calls to the byte file system server will also cause pages in the BFS server to be referenced. Those page references, however, are less important because there is just one set of those pages in a server virtual machine that is (potentially) servicing many end users.
POSIX functions that use the byte file system tend to touch more non-shared pages than do analogous shared file system calls. An example CMS command that uses open, close, read, and write SFS calls was found to reference 40 non-shared pages.
Shell Initialization and Termination

Invocation of the OPENVM SHELL command causes the POSIX shell to be initialized. You remain in the shell environment until you specify “exit”.
Shell initialization references about 1030 pages in the user virtual machine (in addition to the 640 pages that are referenced during POSIX initialization). A subset of these pages continue to be referenced when shell commands are being executed (see “Shell Commands: Real Storage Requirements” on page 89). About 980 of these pages are released on exit from the shell environment. These figures were determined through use of the CP INDICATE USER * EXPANDED command. The sum of the user's resident, expanded storage, and DASD pages was used as a measure of how many unique pages were referenced.
Total CPU time required to initialize and exit the shell was measured using the CP QUERY TIME command. On a 9021-900, shell initialization CPU time was about 0.51 seconds, while shell termination CPU time was about 0.03 seconds.
Shell Commands: CPU Usage

CPU usage information was obtained for a selection of frequently used POSIX shell commands. Elapsed time, BFS server calls, and BFS DASD I/Os were also collected.
The measured commands were collected into a shell script. Each measured command was immediately preceded and immediately followed by a data collection program, invoked by the “cms” shell command. Elapsed times were obtained using the STCK instruction. User machine CPU times were obtained using diagnose code X'0C'. Byte file system server CPU times were obtained using the CP QUERY TIME command. BFS server statistics were collected using the QUERY FILEPOOL COUNTER command.
The data reported for each command are based upon a single execution of that command. The results shown have been adjusted to subtract out the elapsed time and CPU time required to collect the data.
The measurements were made on a non-dedicated 9021-900 during low usage conditions when contention from other system activity was minimal. The BFS server was dedicated during the measurement. Multiple measurement runs were obtained to verify repeatability. The results, shown in Table 28, are from a typical measurement run.
Table 28. Performance of POSIX shell commands on a 9021-900 processor

            Elapsed   Total      User       Server
            Time      CPU Time   CPU Time   CPU Time   BFS    BFS
Command     (sec)     (sec)      (sec)      (sec)      Calls  I/Os

cat_20       0.21      0.12       0.11       0.01       12     19
cat_100      0.44      0.30       0.26       0.04       16     37

cd           0.03      0.01       0.01       0.00        1      9
chmod        0.08      0.06       0.05       0.01        8     10

cp_1         0.19      0.09       0.07       0.02       16     26
cp_20        0.30      0.09       0.07       0.02       14     23
cp_100       0.42      0.11       0.08       0.03       18     44

c89_null     4.05      0.57       0.53       0.04       37     41
c89_72k      5.43      2.13       2.05       0.08       78     55

diff         0.45      0.36       0.34       0.02
echo         0.01      0.02       0.01       0.01
grep         0.14      0.07       0.06       0.01
ls           0.18      0.13       0.10       0.03
mkdir        0.16      0.07       0.05       0.02
mv           0.15      0.06       0.05       0.01
ps           0.09      0.06       0.05       0.01
pwd          0.01      0.01       0.00       0.01
rm           0.13      0.07       0.05       0.02
rmdir        0.14      0.07       0.05       0.02

sort_1       0.36      0.12       0.08       0.04       27     40
sort_20      0.54      0.43       0.40       0.03       26     30
sort_100     6.14      5.69       5.63       0.06       31     77
Notes:
1. A command name that is suffixed with “_nn” means that the file involved is nn Kbytes in size. For example, “cp_20” means that the cp command copies a 20KB file.
2. “c89_null” means that the c89 command is used to compile a null (4 lines) C program. “c89_72k” means that the c89 command is used to compile a 72KB (2050 lines) C program.
For all but the c89 cases, BFS I/Os represent all the DASD I/Os that occurred during that command's execution. The c89 command caused additional DASD I/Os to occur because it used a number of CMS files during the compilation. For example, the included header files were in an SFS directory that resided in a separate SFS file pool.
The total CPU time required to run a shell command (for example, “rm”) is typically much higher than the CPU time required to run an analogous CMS command (for example, “erase”). A major contributing factor is that shell commands must be run in an asynchronous, multitasking environment whereas CMS commands run serially on the command thread. In spite of the increased CPU usage, most shell commands are quite responsive. Most of the commands in Table 28 show subsecond response time.
Shell Commands: Real Storage Requirements

Instruction traces of the user virtual machine were collected for two example shell commands, “ls” and “rm”. The “ls” command listed the (16) files in the current directory. The “rm” command removed a small file from the current directory. These traces were used to determine the number of unique pages that were referenced during the execution of each traced command and during the combined execution of both traced commands. Each referenced page was classified as non-shared or shared based upon whether there is one such page per CMS user or whether there is just one instance of that page that is shared among all CMS users (by being in a shared segment or saved system). The results are summarized in Table 29.
The count of non-shared pages is more important than the count of shared pages because each user doing that function requires a separate set of non-shared pages. All of the shared pages are from the CMS saved system.
Such a small sample of commands can only give a very general idea of the real storage requirements associated with running shell commands. The results suggest that most shell commands tend to reference the same shared pages but they reference somewhat different subsets of the non-shared pages.
Table 29. Real storage requirements of two example shell commands

            Unique        Unique
            Non-Shared    Shared
Command     Pages         Pages

rm            572           312
ls            571           311

both          709           319
Large File Performance

A series of single-user measurements was obtained to explore the performance of sequentially writing and reading large BFS files using the write() and read() functions. File size and the number of bytes transferred per request were varied.
Elapsed times were obtained by an assembler routine that used the STCK instruction. User machine CPU times were obtained using the clock() function. Byte file system server CPU times were obtained using the CP QUERY TIME command. BFS server statistics were collected using the QUERY FILEPOOL COUNTER command.
The measurements were taken on a non-dedicated 9021-900 during low usage conditions when contention from other system activity was minimal. The BFS server was dedicated during the measurements. Multiple measurement runs were obtained to verify repeatability. The results, shown in Table 30, are from a typical measurement run.
Table 30. Large file performance on a 9021-900 processor

                                 Elapsed           Total      User       Server
                                 Time      MB/     CPU Time   CPU Time   CPU Time   BFS    BFS
Case                             (sec)     Second  (sec)      (sec)      (sec)      Calls  I/Os

write, 0.5MB, 128 bytes/request   0.96     0.52     0.63       0.55       0.08       33     88
write, 0.5MB, 4KB/request         0.46     1.08     0.13       0.05       0.08       33     88
write, 0.5MB, 64KB/request        0.44     1.13     0.10       0.03       0.07       33     87

write, 2MB, 128 bytes/request     3.88     0.52     2.51       2.23       0.28      110    329
write, 2MB, 4KB/request           1.82     1.10     0.46       0.18       0.28      110    331
write, 2MB, 64KB/request          1.73     1.16     0.41       0.13       0.28      110    330

read, 0.5MB, 128 bytes/request    0.76     0.66     0.58       0.55       0.03       29     32
read, 0.5MB, 4KB/request          0.25     2.00     0.08       0.05       0.03       29     32
read, 0.5MB, 64KB/request         0.25     2.02     0.06       0.03       0.03       29     32

read, 2MB, 128 bytes/request      3.08     0.65     2.37       2.26       0.11      158    120
read, 2MB, 4KB/request            1.06     1.89     0.31       0.21       0.10      158    120
read, 2MB, 64KB/request           0.97     2.06     0.26       0.16       0.10      158    120
Most of the BFS calls are write or read requests. For example, in the first case listed in the table, 26 of the 33 BFS calls are write requests. The byte file system reads ahead and writes behind up to 5 4KB blocks at a time.9 This is reflected in these results. For example, the first case involves writing a 0.5MB file, which is 128 4KB blocks. 128 blocks / 26 BFS writes = 4.92 blocks per BFS write.
In the read cases, the number of BFS I/Os is approximately equal to the number of BFS calls. This reflects the fact that most of these calls are read requests and most of those requests are satisfied by the use of one multi-block I/O request. In the write cases, the number of BFS I/Os is approximately 3 times higher than the number of write requests. This reflects the fact that each write request typically results in one multi-block I/O request to write the file data and one forced write to each of the two BFS file pool logs.
9 This is analogous to, but different from, the read-ahead, write-behind mechanism used by the CMS file system (minidisks and SFS).
User machine CPU time decreases as the number of bytes per write or read request increases, reflecting the increased efficiency achieved by having fewer function calls to process. Server machine CPU time is essentially independent of the number of bytes per write or read request because this has little effect on the number of BFS calls that are made or the number of bytes requested per BFS call.
Byte File System Loading

A series of measurements was collected to assess the performance characteristics of a BFS server as a function of increased loading.
The load was applied by 1 to 8 disconnected user virtual machines running concurrently. Each user machine executed the same workload. There was no delay (think time) between requests so the user machines represent batch activity. The workload consisted of writing, reading, and erasing large files (13 0.5MB files, 14 1MB files, and 15 2MB files). Each user machine had a separate working directory and worked with its own set of files. However, all of these files resided in one byte file system in the same BFS server.
The measurement interval was from first user start until last user end. All users started and ended their work within a few seconds of each other. Monitor data were collected at 10-second intervals during this period and later reduced by VMPRF. QUERY FILEPOOL COUNTER report output was collected before and after the measurement interval.
The BFS server data (storage group 1 and storage group 2) were spread across 10 3390-2 volumes. Two of these volumes also contained a log minidisk. Five volumes (with one log minidisk) were behind one 3990-3 control unit, while the other five volumes (with the other log minidisk) were behind a separate 3990-3 control unit.
Workload: Reading and Writing Large BFS Files
Hardware Configuration
Processor model:   9121-480
Processors used:   2
Storage:
  Real:            256MB
  Expanded:        0MB
Tape:              3480 (Monitor)
DASD:
Type of   Control   Number                ------ Number of Volumes ------
DASD      Unit      of Paths   PAGE  SPOOL  TDSK  User  Server     System

3390-2    3990-2    4          16    6
3390-2    3990-3    2                                              2 R
3390-2    3990-3    4                            2     10 R (10)
10 DASD fast write was enabled for one of the measurements (see discussion).
Note: R or W next to the DASD counts means basic cache enabled or DASD fastwrite (and basic cache) enabled, respectively.
Software Configuration
Virtual Machines:
Virtual                              Machine
Machine    Number   Type             Size/Mode   SHARE   RESERVED   Other Options

BFSERV1    1        BFS              64MB/XC     1500    1300       QUICKDSP ON
WRITER     1        CP monitor       2MB/XA      100                QUICKDSP ON
Unnnn      1-8      Users            32MB/XC     100

Measurement Discussion: Megabytes per second (read or written) was selected as the measure of overall throughput. Figure 9 shows a plot of throughput as a function of the number of concurrent user virtual machines. Table 31 summarizes the performance information that was collected at each measured number of users.
Figure 9. BFS throughput vs number of users on a 9121-480
The results show that the BFS server was able to support a high level of concurrency. The measured configuration achieved a maximum throughput exceeding 1.4 MB/sec. As discussed below, the I/O subsystem was the limiting factor in this case.
Notes:
1. Seconds/File is 1/(Files/sec/User). It is the average elapsed time required to read or write a file. The average file is 1.2MB in size.

2. CPU-Seconds/File is (CPU Time)/(Files Processed), where CPU Time is (Elapsed Time)*2*(CPU Utilization)/100. CPU Utilization is multiplied by 2 because there are 2 processors on the 9121-480.

3. BFS Server Pct Busy is (total CPU-seconds used by the BFS server)/(Elapsed Time). It represents the percentage of the time that the BFS server is running on a processor. Since the BFS server can only run on one processor at a time, this cannot exceed 100% and thus represents a potential limiting factor.

4. The two BFS logs are on MDSK01 and MDSK09. Because the BFS file pool was implemented symmetrically across two control units and because log writes are done to both logs, the performance results for each log DASD are nearly identical. Results for MDSK09 are shown here.

5. The performance characteristics of the remaining 8 non-log DASD in the BFS file pool are averaged and shown under Avg Non-log DASD.

6. Service Time is the average amount of time it takes to complete the DASD I/O request once it has been initiated.

7. Response Time is Service Time plus the amount of time spent waiting before the I/O can be initiated because the path to the required DASD volume is busy.
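The formulas in notes 1 through 3 can be sketched as a small calculation. This is an illustrative recomputation, not VMPRF code; the function and argument names are invented, and the server CPU-seconds value below is back-computed from the reported 13.9% rather than taken from a published figure.

```python
# Recomputes the derived metrics defined in notes 1-3 above.
# Sample inputs approximate the 1-user column of Table 31.
N_PROCESSORS = 2  # the 9121-480 is a 2-way processor (note 2)

def derived_metrics(elapsed_sec, users, files, cpu_util_pct, server_cpu_sec):
    # note 2: CPU Time = (Elapsed Time) * 2 * (CPU Utilization) / 100
    cpu_time = elapsed_sec * N_PROCESSORS * cpu_util_pct / 100.0
    files_per_sec_per_user = files / (elapsed_sec * users)
    return {
        "seconds_per_file": 1.0 / files_per_sec_per_user,             # note 1
        "cpu_seconds_per_file": cpu_time / files,                     # note 2
        "bfs_server_pct_busy": 100.0 * server_cpu_sec / elapsed_sec,  # note 3
    }

m = derived_metrics(elapsed_sec=177, users=1, files=84,
                    cpu_util_pct=24.5, server_cpu_sec=24.6)
# seconds_per_file ~ 2.11, cpu_seconds_per_file ~ 1.03,
# bfs_server_pct_busy ~ 13.9, matching the 1-user column of Table 31
```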
Table 31. BFS performance vs number of batch users on a 9121-480

Batch User Machines              1       2       4       8
Elapsed Time (sec)             177     214     328     597
Megabytes Transferred          101     202     404     808
MB/sec                        0.57    0.94    1.23    1.35
Files Processed                 84     168     336     672
Files/sec/User               0.475   0.393   0.256   0.141
Seconds/File                  2.11    2.54    3.91    7.11
CPU-Seconds/File              1.03    1.06    1.08    1.20
CPU Utilization               24.5    41.7    55.1    60.2
BFS Server Pct Busy           13.9    24.3    31.4    35.6
Highest Channel Busy           8.9     9.5     9.4     9.0
Log DASD (MDSK09):
  IOs/sec                     17.3    29.4    38.3    40.5
  Percent Busy                23.1    38.4    45.4    51.2
  Service Time (msec)         13.3    13.0    11.9    12.7
  Response Time (msec)        13.3    15.3    14.5    15.2
Avg Non-log DASD:
  IOs/sec                      2.0     3.1     4.1     4.5
  Percent Busy                 2.8     5.8     8.5     9.2
  Service Time (msec)         14.4    18.9    20.9    23.4
  Response Time (msec)        14.4    20.4    33.4    51.5

Note: All CPU, channel, and DASD performance data are from VMPRF.

For the measured configuration, the I/O subsystem was the limiting factor that determined maximum throughput. More specifically, contention at the two control units appears to have been the limiting factor. This is indicated by the Avg Non-log DASD results. Service Time remains at reasonable levels while progressing from 1 to 8 users, while Response Time increases greatly. The moderate DASD service times and low DASD utilizations show that there is no problem doing the I/Os once the path to the DASD is clear. The very low channel utilizations show that channel contention is not a problem.
The primary source of control unit contention was the very high I/O rates to the BFS logs. This meant that whenever an I/O was to be started to a non-log DASD, it was likely that the control unit was busy handling a log I/O request. The DASD response time of the log DASD was less affected at high loadings because the other 4 DASD behind the same control unit had relatively low I/O rates.

There are, of course, many other potential limiting factors. Which factor serves to limit throughput depends upon the relative capacities of the various hardware components in the configuration. For these measurements, the overall capacity of the 9121-480 processor was not limiting, as evidenced by the fact that CPU Utilization only reached 60%. The CPU utilization in the BFS server is another potential limiting factor, but BFS Server Pct Busy only went up to 36%.

The fact that the limiting factor can change depending upon the configuration is illustrated by an additional 8-user measurement that was done with DASD fast write (DFW) enabled for the 10 BFS file pool DASD (see Table 32). The presence of write caching greatly improved the I/O subsystem, allowing the throughput rate to rise from 1.35 to 1.95 MB/sec. This new configuration is limited by the 9121-480's processing capacity, as evidenced by the 86% CPU utilization.

At high levels of contention, deadlock conditions can occasionally occur in the BFS server. When the BFS server identifies a deadlock condition, it breaks the deadlock by failing one of the participating requests. This occurred during an attempted measurement at 12 users and the message "EDC5116I Resource deadlock avoided" was displayed on the affected user virtual machine's console.
Table 32. The effect of DASD fast write on BFS throughput

DASD Fast Write                 NO      YES
Batch User Machines              8        8
Elapsed Time (sec)             597      414
Megabytes Transferred          808      808
MB/sec                        1.35     1.95
Files Processed                672      672
Files/sec/User               0.141    0.203
Seconds/File                  7.11     4.93
CPU-Seconds/File              1.20     1.06
CPU Utilization               60.2     86.3
BFS Server Pct Busy           35.6     49.8
Highest Channel Busy           9.0     13.2
Log DASD (MDSK09):
  IOs/sec                     40.5     58.2
  Percent Busy                51.2     19.9
  Service Time (msec)         12.7      3.4
  Response Time (msec)        15.2      3.8
Avg Non-log DASD:
  IOs/sec                      4.5      6.2
  Percent Busy                 9.2      5.6
  Service Time (msec)         23.4      8.9
  Response Time (msec)        51.5     14.6

Note: All CPU, channel, and DASD performance data are from VMPRF.
DCE

This section provides performance measurement results for IBM OpenEdition Distributed Computing Environment for VM/ESA (VM DCE). The following topics are covered:
Idling Overhead
Single Thread RPC Measurements
RPC Throughput Measurements
All measurement results provided in this section were made with TCP/IP for VM Version 2 Release 3 and a pre-ship version of VM DCE running on VM/ESA 2.1.0.
Idling Overhead

The DCECORE virtual machine and VM DCE application servers set up various timer events and take timer interrupts as part of carrying out their functions. As a result, there is a small amount of overhead associated with these server virtual machines once they are started, even when they are not in use.
Three measurements were taken on a dedicated 9121-320 system to quantify this idling overhead. Monitor records were collected for each measurement and later reduced by VMPRF. The first measurement was a base measurement of an idle VM/ESA 2.1.0 system taken before any DCE-related servers were started. The SYSTEM_SUMMARY_BY_TIME report showed that the system was 1.8% busy. This was due to CP functions (such as scheduling and monitor), MONWRITE, and TCP/IP.
The second measurement was taken about a minute after starting the DCECORE virtual machine. The system utilization rose to 2.1%, indicating that the DCECORE virtual machine was using 0.3% of the processor.
A VM DCE server application was then started and the third measurement was taken. The system utilization remained at 2.1%. The USER_RESOURCE_UTIL report showed a slight amount of processor usage in the server virtual machine, but it was apparently not enough to affect the reported total system processor usage, which is shown with a precision of 0.1%. Each application server virtual machine has its own timer activity, so several such servers could have a measurable effect on total system idling overhead.
Since the frequency of timer interrupts handled by the DCECORE virtual machine and DCE application servers is independent of processor speed, you can expect the DCE-related idling overhead, as a percentage of total processor capacity, to be higher than 0.3% on slower processors and lower than 0.3% on faster processors.
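The scaling argument above can be sketched numerically. Only the 0.3% figure comes from the measurement; the relative-speed values and the helper name are hypothetical, and this ignores second-order effects such as differences in memory or I/O behavior between models.

```python
# Sketch of the scaling claim above: the DCE timer work is a roughly
# fixed amount of CPU time per second, so its share of total capacity
# scales inversely with processor speed. 0.3% is the measured figure
# on the 9121-320; `relative_speed` is speed relative to that machine.

def idling_overhead_pct(relative_speed, measured_pct=0.3):
    """Estimated DCE idling overhead (percent of total capacity)."""
    return measured_pct / relative_speed

idling_overhead_pct(0.5)  # a half-speed processor -> about 0.6%
idling_overhead_pct(2.0)  # a twice-as-fast processor -> about 0.15%
```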
Single Thread RPC Measurements

Single thread remote procedure call measurements were obtained for 24 different RPC types on 3 different hardware configurations.
The RPCs were executed by an internal RPC performance driver application. The client side of the application was executed on an AIX system running on an RS/6000. The server side was executed in a VM/ESA virtual machine running on an ES/9000 system. The client and server machines were in the same DCE cell.

The AIX and VM/ESA systems were connected through a 16 megabit IBM Token Ring. During the measurements, the AIX and VM/ESA systems were dedicated. The token ring was not dedicated, so the reported response times are somewhat influenced by the presence of other activity on the LAN. However, repeat measurements and measurements obtained during low usage hours indicate that extraneous activity on the LAN did not have an appreciable effect on the results.

The 24 measured RPC cases consisted of 4 different sizes, using each of the 6 available RPC protection levels. For each such case, the measurement consisted of one AIX client thread sending consecutive RPCs (no think time) to the application server on VM/ESA for 2 minutes. For each RPC, the client sent the requested number of bytes to the server and waited for the server's response. The server echoed the same amount of data back to the client. For both the send and the reply, the data were in the form of a single binary argument value.
RPC counts and RPC response time data were obtained from the performance driver. Monitor records were collected on the VM/ESA system at 6-second intervals and reduced by VMPRF. The CPU usage data in the SYSTEM_SUMMARY_BY_TIME and USER_RESOURCE_UTIL reports were used, in conjunction with the RPC counts, to calculate ES/9000 CPU usage per RPC.
A set of RPC measurements was collected on three different configurations.
Table 33. Measured configurations

                             Config 1   Config 2      Config 3
AIX system (RS/6000 model)   220        520           520
AIX release level            3.2.4      3.2.5         3.2.5
VM/ESA system                9121-320   9121-621 11   9121-621 11
VM/ESA system processors     1          2             2
host/LAN connection          3172-1     3172-3        OSA-1 12

For results, see Table 34, Table 35, and Table 36, respectively.
11 For config 2, this system was configured by physically partitioning a 9121-742 (4-way) or by varying two of the processors offline from the hardware configuration screen. For config 3, this system was configured by bringing up a single image 9121-742 and then using the CP VARY PROCESSOR command to vary two of the alternate processors offline. This was done to expedite switching between 3172-3 and OSA-1 LAN connectivity. The performance differences between these methods of configuring a "9121-621" are small and can be ignored when making comparisons.
12 IBM S/390 Open Systems Adapter
Table 34. Single thread RPC performance: Config 1 (9121-320, 3172-1, RS/6000 220)

KB Sent/   Protection      Response time       9121-320 CPU time per RPC
Returned   Level         Average    Low     Total   Server   TCP/IP   Other
0          none             20.5   19.2      13.5      9.1      3.6     0.8
1          none             39.9   24.7      14.2      9.3      4.0     0.9
4          none             40.5   29.7      14.7      9.1      4.4     1.2
32         none            171.0  146.2      90.3     55.5     28.4     6.4
0          connect          21.0   19.8      13.8      9.3      3.7     0.8
1          connect          40.2   24.6      14.5      9.4      4.0     1.1
4          connect          40.7   29.4      15.8      9.5      4.4     1.9
32         connect         171.1  148.0      91.5     56.8     28.4     6.3
0          call             21.0   19.7      13.8      9.3      3.7     0.8
1          call             40.2   24.1      14.5      9.4      3.7     1.4
4          call             40.6   30.0      15.0      9.5      4.4     1.1
32         call            171.5  147.6      91.0     57.0     28.5     5.5
0          pkt              21.0   19.7      13.8      9.2      3.7     0.9
1          pkt              40.2   24.9      14.5      9.4      4.0     1.1
4          pkt              40.6   29.8      15.0      9.5      4.4     1.1
32         pkt             170.5  148.5      91.1     56.7     28.3     6.1
0          pkt_integ        22.0   20.6      14.4      9.7      3.7     1.0
1          pkt_integ        40.3   27.6      16.6     11.4      3.7     1.5
4          pkt_integ        51.2   39.4      22.2     16.2      4.3     1.7
32         pkt_integ       218.5  198.5     146.8    110.5     29.0     7.3
0          pkt_privacy      23.2   21.8      15.1     10.5      3.7     0.9
1          pkt_privacy      50.5   36.2      22.4     16.8      3.8     1.8
4          pkt_privacy      83.3   68.6      43.2     36.7      4.8     1.7
32         pkt_privacy     400.5  383.8     307.9    265.8     29.9    12.2

Note: All times are in milliseconds. All CPU results are from VMPRF.
The protection levels have the following meanings:
none No protection.
connect Provide protection when the client connects with the server.
call Provide protection when the server receives the request.
pkt Ensure that all data received is from the expected client.
pkt_integ Ensure that none of the data transferred between client and server has been modified.
pkt_privacy All of the above protection plus encryption.
Like all response times, RPC response times are determined by the combination of many variables such as processor speeds, network capacity, and resource contention. Because of this, the RPC response time results shown in this and the following tables should be viewed as illustrative examples.
Because the token ring was not dedicated, the average response times are increased to some extent by the presence of extraneous activity on the LAN. This extraneous activity has little effect on low response time, which typically occurs when token ring contention is not present at that point in time. If the token ring had been dedicated, it is expected that the average response times would fall between the low and average response time values shown in the results tables.
Total CPU time per RPC is based on the Pct Busy column in the SYSTEM_SUMMARY_BY_TIME VMPRF report. This time includes all CPU usage in the system, including CP system time that cannot logically be attributed to any given user. Server and TCP/IP time per RPC are based on the Total CPU Seconds column in the USER_RESOURCE_UTIL report. This includes all virtual CPU time consumed by that virtual machine plus all CP CPU time that is used to satisfy its requests for CP services. Other CPU time per RPC is calculated as Total - (Server + TCP/IP). This component basically corresponds to the non-DCE and DCE idling overhead discussed in "Idling Overhead" on page 96. Most of it is CP system time and is not related to the RPC activity being measured.
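The attribution just described can be expressed as a small calculation. This is an illustrative sketch, not VMPRF code; the function and field names are invented, and the sample values are from the 0KB, protection level none row of Table 34, in milliseconds per RPC.

```python
# Splits total CPU time per RPC into server, TCP/IP, and "other"
# (CP system time plus idling overhead), as the VMPRF data are
# combined in this report: Other = Total - (Server + TCP/IP).

def cpu_breakdown(total_ms, server_ms, tcpip_ms):
    other_ms = total_ms - (server_ms + tcpip_ms)
    return {"server": server_ms, "tcpip": tcpip_ms, "other": other_ms}

cpu_breakdown(total_ms=13.5, server_ms=9.1, tcpip_ms=3.6)
# -> other comes out to 0.8 ms, matching the Other column of Table 34
```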
The results show that the performance of the first four protection levels is about the same and is essentially independent of RPC size. In contrast to this, the performance of pkt_integ and pkt_privacy is very size-dependent. For the null RPC case (0KB), response times and CPU usage are not much higher for pkt_integ and pkt_privacy as compared to the first four protection levels. The difference grows rapidly, however, as RPC size increases.

As might be expected, response time and ES/9000 CPU usage increase with increasing RPC size.

VM DCE uses a maximum packet size of 4274 bytes. This is hardcoded and is not a tuning variable. For the first four DCE protection levels (those whose cost is independent of RPC request size), CPU time per RPC is roughly proportional to the number of RPC packets required to contain the request. This is not shown very well in the results table because it lacks intermediate sizes between 4KB and 32KB, but this has been verified by additional measurements at 8KB and 16KB.

The packet size used by the lower level transport layer is typically smaller. When that is the case, these packets are combined into RPC packets and fragmented from RPC packets by TCP/IP. This is the main reason why CPU time per RPC in TCP/IP increases when going from 0KB to 1KB to 4KB RPCs. For the measurements in this report, the transport layer packet size was 1500 bytes (plus header length).13
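The packet-count proportionality above can be sketched roughly as follows. The 4274-byte RPC packet size and the 1500-byte transport packet size come from the text; the function name is invented, and header overhead is ignored, so these counts are approximations rather than exact wire behavior.

```python
import math

# Rough packet-count estimate for the fragmentation discussed above.
RPC_PACKET_BYTES = 4274   # fixed VM DCE RPC packet size (per the text)
TRANSPORT_BYTES = 1500    # transport packet size used in these measurements

def packets_per_request(request_bytes):
    """Approximate RPC-level and transport-level packet counts for one
    request direction (headers ignored; a null request still needs one
    packet at each level)."""
    rpc_packets = max(1, math.ceil(request_bytes / RPC_PACKET_BYTES))
    transport_packets = max(1, math.ceil(request_bytes / TRANSPORT_BYTES))
    return rpc_packets, transport_packets

packets_per_request(4 * 1024)    # 4KB: 1 RPC packet, 3 transport packets
packets_per_request(32 * 1024)   # 32KB: 8 RPC packets, 22 transport packets
```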
All of the RPC results in this report are for non-idempotent14 requests. Additional measurements (not shown) yielded equivalent results for a corresponding set of RPCs that were declared as being idempotent. The results were equivalent because the think time used (zero) was short enough that separate acknowledgements to the server were not required, so the idempotent optimization did not apply.15
13 This is specified in the TCP/IP configuration file (the default name is PROFILE TCPIP). See "Tuning Performance" in TCP/IP Version 2 Release 2 for VM: Planning and Customization for performance considerations.

14 A non-idempotent request must execute either once, partially, or not at all. An idempotent request can safely be done more than once. That is, even if it is done more than once, it yields the same results and produces no undesirable side effects. Each different RPC call type is declared as being idempotent or not in the interface definition language (IDL) file.

15 If the RPC is non-idempotent, the RPC runtime in the client must send an acknowledgement to the server that it has successfully received the RPC response. If the client thread sends the server another RPC within 3 seconds, this subsequent RPC serves as the acknowledgement and there is no overhead to handle a separate acknowledgement. Consequently, an idempotent RPC performs no better than an equivalent non-idempotent one if the client thread's RPCs are separated by less than 3 seconds. However, when the spacing is more than 3 seconds, an idempotent RPC performs better because the separate acknowledgement is avoided.

Because these are single thread measurements and because the measured RPCs do not incur any I/O or other delays while they are being handled in the VM/ESA system, the total ES/9000 CPU time per RPC shows approximately how much of the observed average response time delay is in the server system. Consider the first RPC case (0KB, protection level of none) as an example. About 13.5 msec of the 20.5 msec response time is spent in the 9121-320 server system. The remaining 7.0 msec represents time spent in the RS/6000 model 220 client system handling its half of the RPC processing, along with transmission latency in the token ring and in the 3172-1 control unit.

The measurements in this report are for the case where the bytes are transmitted as a single binary argument value. CPU usage per RPC would have only been slightly higher if the bytes had been sent as multiple binary argument values. The presence of arguments of a data type that requires conversion (such as character or floating point) will increase RPC CPU usage more significantly.

The measurements in this report are for the case where client and server both reside in the same DCE cell. Similar results can be expected for the case where the hardware configuration remains the same but the two nodes are configured in two separate DCE cells. The reason for this is that once the binding between client and server is complete, subsequent RPCs flow directly between client and server in the same manner regardless of what cells they reside in.

The measurements in this report are for the case where the server application resides in a VM/ESA system, while the client is on another node elsewhere in the network. The processing required to handle an RPC request on the client side is quite similar to the processing that is required on the server side. This has been confirmed by additional measurements (not shown) where the client side was run on the measured VM/ESA system and the server was on the AIX system.
Table 35. Single thread RPC performance: Config 2 (9121-621, 3172-3, RS/6000 520)

KB Sent/   Protection      Response time       9121-621 CPU time per RPC
Returned   Level         Average    Low     Total   Server   TCP/IP   Other
0          none             19.9   14.4       9.0      6.2      2.2     0.6
1          none             20.0   15.2       9.0      6.2      2.2     0.6
4          none             20.6   17.2       9.4      6.2      2.4     0.8
32         none             91.3   84.3      60.3     39.6     17.5     3.2
0          connect          20.0   13.8       9.0      6.4      2.2     0.4
1          connect          20.1   15.5       9.1      6.2      2.2     0.7
4          connect          21.1   19.0       9.6      6.4      2.5     0.7
32         connect          91.7   84.8      61.1     40.5     17.6     3.0
0          call             20.0   13.7       9.0      6.4      2.0     0.6
1          call             20.2   15.7       9.1      6.3      2.2     0.6
4          call             21.1   19.0       9.5      6.4      2.5     0.6
32         call             91.3   85.1      61.0     40.4     16.8     3.8
0          pkt              19.8   13.5       9.0      6.3      2.2     0.5
1          pkt              20.0   15.2       9.2      6.4      2.2     0.6
4          pkt              21.2   18.9       9.6      6.4      2.5     0.7
32         pkt              92.6   85.5      61.7     40.9     17.7     3.1
0          pkt_integ        20.0   14.1       9.5      6.7      2.2     0.6
1          pkt_integ        20.1   17.5      10.6      7.8      2.2     0.6
4          pkt_integ        30.3   27.3      14.7     11.1      2.5     1.1
32         pkt_integ       135.2  118.8      98.6     77.4     17.9     3.3
0          pkt_privacy      20.0   15.4       9.9      7.0      2.2     0.7
1          pkt_privacy      30.2   26.4      14.9     11.8      2.3     0.8
4          pkt_privacy      62.7   58.6      29.9     25.6      2.6     1.7
32         pkt_privacy     343.0  332.4     212.9    188.0     17.1     7.8

Note: All times are in milliseconds. All CPU results are from VMPRF.
Compared to the config 1 results shown in Table 34, these config 2 results show lower response times and ES/9000 CPU usage. The CPU times per RPC are 30% to 37% lower. This is in proportion to the speed difference between the 9121-320 processor and one of the 9121-621 processors.

For most of the RPC cases, the response time decreases exceed the CPU time decreases. This is due to the other configuration differences (3172 model and RS/6000 model). The fact that the 9121-621 is a 2-way has little significance for these single thread measurements because, for the most part, only one processor at a time is being used.
Table 36. Single thread RPC performance: Config 3 (9121-621, OSA-1, RS/6000 520)

KB Sent/   Protection      Response time       9121-621 CPU time per RPC
Returned   Level         Average    Low     Total   Server   TCP/IP   Other
0          none             12.2   11.5       8.9      6.1      2.2     0.6
1          none             13.3   12.7       9.0      6.3      2.1     0.6
4          none             16.9   15.6       9.7      6.2      2.7     0.8
32         none             68.3   64.7      63.8     40.5     19.9     3.4
0          connect          12.5   11.8       9.0      6.3      2.1     0.6
1          connect          13.8   12.9       9.1      6.3      2.2     0.6
4          connect          17.0   15.9       9.8      6.4      2.7     0.7
32         connect          68.9   65.4      64.3     40.8     20.1     3.4
0          call             12.5   11.8       9.0      6.3      2.1     0.6
1          call             13.7   11.4       9.1      6.3      2.2     0.6
4          call             17.0   15.9       9.8      6.4      2.7     0.7
32         call             68.9   65.2      64.3     40.7     20.1     3.5
0          pkt              12.5   11.3       9.0      6.3      2.1     0.6
1          pkt              13.6   12.9       9.1      6.3      2.2     0.6
4          pkt              17.1   15.9       9.8      6.3      2.7     0.8
32         pkt              69.7   63.0      64.4     40.6     19.7     4.1
0          pkt_integ        13.3   12.6       9.3      6.7      2.1     0.5
1          pkt_integ        16.3   15.5      10.7      7.8      2.2     0.7
4          pkt_integ        25.2   24.0      15.0     11.2      2.7     1.1
32         pkt_integ       113.4   97.7     103.4     77.4     20.8     5.2
0          pkt_privacy      14.4   13.6       9.8      7.1      2.2     0.5
1          pkt_privacy      25.3   24.3      15.2     11.8      2.3     1.1
4          pkt_privacy      58.0   55.6      30.6     25.6      2.9     2.1
32         pkt_privacy     331.2  324.0     217.6    186.8     19.2    11.6

Note: All times are in milliseconds. All CPU results are from VMPRF.
The only difference between config 2 and config 3 is that config 2 uses a 3172-3 for LAN connectivity, while config 3 uses OSA-1 (in TCP/IP passthrough mode). As a result, all differences beyond normal run variability should be attributable to the differences between these two host/LAN connectivity methods.

The OSA-1 results show CPU usages per RPC that are similar to the 3172-3 results. The results match within 2% for the 0KB and 1KB cases and are 2% to 6% higher for the 4KB and 32KB results. As might be expected, the CPU usage increases observed for the 4KB and 32KB cases are mostly in the TCP/IP virtual machine and presumably reflect differences in how the 3172-3 and OSA-1 interact with TCP/IP.

The OSA-1 results show significantly lower response times relative to the corresponding 3172-3 results. Most of the RPC cases have average response time decreases in the 20% to 40% range. The larger RPCs using pkt_integ and pkt_privacy are exceptions. They show smaller response time decreases because the response times for these cases are more dominated by CPU usage in the client and server systems.
RPC Throughput Measurements

A set of RPC throughput measurements was collected for each of the three configurations described in Table 33. A non-idempotent 1KB RPC with protection level of none was used for all of these measurements. The performance driver application that was used for the single thread measurements was also used for these measurements. For each configuration, the degree of loading on the ES/9000 server system was progressively increased by increasing the number of concurrent client threads until maximum capacity was achieved. As with the single thread measurements, there was no think time between RPC requests.
For each of the three configurations, a set of measurements was obtained for which all RPCs were directed to a single application server virtual machine. That server was started with 10 threads for processing incoming RPCs.
For the 9121-621 3172-3 measurements (config 2), an additional set of measurements was obtained using two application servers. For those measurements, half of the total load originated from one AIX RS/6000 client system and was directed to one server virtual machine, while an equivalent load originated from a second AIX RS/6000 client machine and was directed to a second 10-thread server virtual machine.16
Each measurement was 5 minutes long. The RPC throughput rate, RPC count, and RPC response time data were obtained from the performance driver. Monitor records were collected on the VM/ESA system at 6-second intervals and reduced by VMPRF. The CPU usage data in the SYSTEM_SUMMARY_BY_TIME and USER_RESOURCE_UTIL reports were used, in conjunction with the RPC counts, to calculate ES/9000 CPU usage per RPC. Data from these same reports were also used to determine average processor utilization and average server utilization. Average server utilization was calculated as:

    100*(ServerTCPU/Servers)/ElapsedTime

where:

ServerTCPU   is total CPU time used by the application server virtual machine(s), in seconds.

Servers      is the number of server virtual machines (1 or 2).

ElapsedTime  is the measurement duration, in seconds.
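The formula above translates directly into code. The function name is invented for this sketch, and the sample server CPU time is back-computed from the reported 93.8% utilization rather than taken from a published figure.

```python
# Direct transcription of the average-server-utilization formula:
#   100 * (ServerTCPU / Servers) / ElapsedTime

def avg_server_utilization(server_tcpu_sec, servers, elapsed_sec):
    """Percent busy, averaged across the server virtual machines."""
    return 100.0 * (server_tcpu_sec / servers) / elapsed_sec

# One 10-thread server over a 5-minute (300-second) run:
avg_server_utilization(server_tcpu_sec=281.4, servers=1, elapsed_sec=300)
# -> about 93.8 percent, comparable to the saturated runs in Table 38
```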
Figure 10 shows a plot of RPC throughput as a function of the total number of concurrent client threads for each of the 3172-based measurement sequences. Table 37, Table 38, and Table 39 provide additional data for these measurements. Figure 12 compares the 3172-3 and the OSA-1 results for the 9121-621 single server case. The OSA-1 results are provided in Table 40.
16 Two client machines were used for these 2 server measurements because RPC throughput would otherwise have become limited by the CPU capacity of the (RS/6000 model 520) client machine. One client machine could have been used if an RS/6000 model with sufficiently higher capacity had been available. The second client machine was an RS/6000 model 250 running AIX 3.2.5.
Figure 10. VM DCE throughput capacity on various ES/9000 processors
9121-320 Results
Throughput reaches a limit at about 77 RPCs per second (Table 37) because 9121-320 processor capacity has been reached. At 4 threads, processor utilization has reached 99.4%.

Average response time increases slowly as RPC throughput is increased until the processor approaches saturation at about 3 threads. The large increase when going from 3 to 4 threads reflects the fact that throughput can only increase slightly, so the load applied by the fourth thread mostly causes the RPCs to experience longer delays.

CPU time per RPC decreases as the RPC throughput rate increases. This is because the constant amount of overhead from the timer-driven functions in TCP/IP, the DCECORE virtual machine, the application server, CP monitor, and other CP functions is being pro-rated across a larger number of RPCs.
The 1 thread measurement is essentially equivalent to the single thread measurement for the 1KB, protection level none RPC case shown in Table 34. Note that the total CPU times per RPC are very close (14.1 and 14.2 msec, respectively).
Table 37. 1KB RPC throughput: config 1 (9121-320, 3172-1), 1 server

Run ID                    T3112   T3122   T3132   T3142
VM Appl. Servers              1       1       1       1
AIX Client Threads            1       2       3       4

Rate (RPCs/sec)            25.0    49.3    70.7    77.3
Avg Response Time          39.9    40.4    42.3    51.6

Utilization
  Processor                35.3    65.4    92.6    99.4
  Server                   23.0    44.7    64.0    69.3

CPU time per RPC
  Total                    14.1    13.3    13.1    12.9
  Server Total              9.2     9.1     9.1     9.0
    Virtual                 8.4     8.2     8.3     8.2
    CP                      0.8     0.9     0.8     0.8
  TCP/IP Total              3.9     3.4     3.4     3.3
    Virtual                 1.6     1.4     1.4     1.3
    CP                      2.3     2.0     2.0     2.0
  Other                     1.0     0.8     0.6     0.6

Note: All times are in milliseconds. All CPU results are from VMPRF.
These throughput results are for the case of 1KB RPCs with protection level none. Since the throughput of this configuration is limited by 9121-320 processor capacity, you can estimate maximum throughput for any of the other RPC cases shown in the single thread results tables by using the ratio of CPU time per RPC for the 1KB protection level none case to the CPU time per RPC for the RPC case of interest. For example, maximum throughput for 32KB RPCs with protection level none can be estimated as:

    77.3 RPCs/sec * (14.2/90.3) = 12 RPCs/sec

The 14.2 and 90.3 are total CPU time per RPC for the 1KB and 32KB protection level none cases, respectively. They are taken from Table 34.
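This estimation method can be written as a small helper. The function name is invented; the inputs are the measured 1KB throughput limit from Table 37 and the total CPU times per RPC from Table 34 (14.2 msec for 1KB none, 90.3 msec for 32KB none). It applies only while the processor, not the I/O path or the server virtual machine, is the limiting factor.

```python
# Scale a CPU-limited throughput by the ratio of per-RPC CPU costs.

def estimate_max_throughput(measured_rate, measured_cpu_ms, target_cpu_ms):
    """Estimated maximum rate for a different RPC case on the same
    processor, given the measured (CPU-limited) rate and CPU cost."""
    return measured_rate * measured_cpu_ms / target_cpu_ms

estimate_max_throughput(77.3, 14.2, 90.3)  # roughly 12 RPCs/sec for 32KB
```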
9121-621 3172-3 Results (1 server)
Throughput reaches a limit at about 158 RPCs per second (Table 38). In this case, throughput becomes limited by the fact that the server virtual machine17 is fully utilized. At 6 threads, server utilization has reached 93.8%. Total system CPU utilization is only 64%, so there is additional capacity.
Compared to the 9121-320 results shown in Table 37, these 9121-621 results show significantly lower response times and CPU usage. The decreased CPU time per RPC is in proportion to the speed difference between the 9121-320 processor and one of the 9121-621 processors.
Table 38. 1KB RPC throughput: config 2 (9121-621, 3172-3), 1 server

Run ID                    T6111   T6121   T6141   T6161
VM Appl. Servers              1       1       1       1
AIX Client Threads            1       2       4       6

Rate (RPCs/sec)            49.8    97.7   154.7   158.0
Avg Response Time          20.0    20.3    25.7    37.9

Utilization
  Avg Processor            21.8    41.8    62.9    64.0
  Server                   30.3    59.3    91.8    93.8

CPU time per RPC
  Total                     8.8     8.6     8.3     8.3
  Server Total              6.1     6.1     6.1     6.1
    Virtual                 5.5     5.5     5.5     5.5
    CP                      0.6     0.6     0.6     0.6
  TCP/IP Total              2.1     2.1     1.9     1.9
    Virtual                 0.8     0.8     0.7     0.7
    CP                      1.3     1.3     1.2     1.2
  Other                     0.6     0.4     0.3     0.3

Note: All times are in milliseconds. All CPU results are from VMPRF.
17 The server virtual machine was configured with one virtual processor. VM DCE does not support client or server virtual machines configured as virtual multiprocessors.
TCP/IP and Other CPU usage are a smaller proportion of total CPU usage than they are for the 9121-320 measurements. For example, TCP/IP plus Other is 30% of all CPU usage for the highest utilization 9121-320 measurement. This drops to 27% for the highest utilization 9121-621 measurement. This is because the idling overhead from TCP/IP, the DCECORE virtual machine, and CP is being pro-rated across a larger number of RPCs.
9121-621 3172-3 Results (2 servers)
The addition of a second server removed server utilization as a limiting factor. This allowed throughput to rise to 215 RPCs per second (Table 39), at which point throughput became limited by the CPU capacity of the 9121-621 system.
Note that CPU time per RPC is somewhat higher in the 2 server case compared to the 1 server case.
Table 39. 1KB RPC throughput: config 2 (9121-621, 3172-3), 2 servers

Run ID                    T6211   T6221   T6231   T6241
VM Appl. Servers              2       2       2       2
AIX Client Threads            2       4       6       8

Rate (RPCs/sec)            98.2   180.5   207.2   214.9
Avg Response Time          20.3    22.2    29.0    37.2

Utilization
  Avg Processor            45.7    82.3    93.4    95.0
  Avg Server               32.0    59.3    67.8    69.1

CPU time per RPC
  Total                     9.3     9.1     9.0     9.0
  Server Total              6.5     6.6     6.5     6.6
    Virtual                 5.9     6.0     6.0     6.0
    CP                      0.6     0.6     0.5     0.6
  TCP/IP Total              2.3     2.2     2.2     2.2
    Virtual                 1.0     0.9     0.9     0.8
    CP                      1.3     1.3     1.3     1.4
  Other                     0.5     0.3     0.3     0.2

Note: All times are in milliseconds. All CPU results are from VMPRF.
Figure 11 shows how the 9121-621 CPU usage is distributed among the DCE application servers, the TCP/IP virtual machine, and other sources (system overhead plus other virtual machines) for run T6241 (8 threads) in Table 39. Note that 24% of the CPU usage is from the TCP/IP virtual machine. It is possible for the utilization of the TCP/IP virtual machine to become the limiting factor on systems that have more processors and more DCE servers, or on systems that heavily use TCP/IP for other purposes in addition to DCE.
Figure 11. Distribution of 9121-621 CPU Usage (Run T6241)
9121-621 OSA-1 Results
Figure 12. VM DCE throughput on a 9121-621: OSA-1/3172-3 comparison
Refer to Table 38 for the 3172-3 results and Table 40 for the OSA-1 results.
As was seen in the single thread data, the OSA-1 configuration provided better response times than the 3172-3 configuration, but at a cost of slightly more CPU usage. This determines the shape of the two curves shown in Figure 12.
At 1 and 2 threads, server CPU utilization is not yet a limiting factor. OSA-1 response times are less than the corresponding 3172-3 response times and, as a result, OSA-1 throughputs are higher. As the number of threads approaches about 4, server utilization becomes the important constraint. Because of this, response time becomes very sensitive to server CPU utilization. Because the 3172-3 configuration uses slightly less CPU time per RPC than the OSA-1 configuration, the server utilization it generates at a given RPC rate is somewhat lower. As a result, the 3172-3 curve crosses the OSA-1 curve at about 4 threads, and above 4 threads the 3172-3 configuration achieves a somewhat higher throughput. This has little practical significance, however, because a properly balanced system would be operated at a much lower server utilization.
Table 40. 1KB RPC throughput: config 3 (9121-621, OSA-1), 1 server

Run ID                 T611OSA1  T612OSA1  T614OSA1  T616OSA1
VM Appl. Servers              1         1         1         1
AIX Client Threads            1         2         4         6

Rate (RPCs/sec)            73.8     139.0     151.9     152.4
Avg Response Time          13.4      14.3      26.2      39.2

Utilization
  Avg Processor            33.1      60.8      65.1      65.4
  Server                   46.3      86.3      94.0      94.3

CPU time per RPC
  Total                     9.0       8.8       8.6       8.6
  Server Total              6.3       6.2       6.2       6.2
    Virtual                 5.6       5.6       5.6       5.6
    CP                      0.7       0.6       0.6       0.6
  TCP/IP Total              2.2       2.1       2.0       2.0
    Virtual                 0.8       0.8       0.8       0.8
    CP                      1.4       1.3       1.2       1.2
  Other                     0.5       0.5       0.4       0.4

Note: All times are in milliseconds. All CPU results are from VMPRF.
GCS TSLICE Option
This section documents the results of a measurement that used the new capability of altering the GCS time slice. This study was done to determine the sensitivity of this new tuning option. The VTAM virtual machine was measured using a 30 millisecond time slice and compared with the default time slice of 300 milliseconds. See "Performance Improvements" on page 9 for more discussion of this item.
Workload: FS8F0R
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Type of DASD   Control Unit   Number of Paths   Number of Volumes (PAGE, SPOOL, TDSK, User, Server, System)
3390-2         3990-2         4                 16 PAGE, 6 SPOOL, 6 TDSK
3390-2         3990-3         2                 2 (R)
3390-2         3990-3         4                 2, 2, 16 (R)
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   512        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1900     Users        3MB/XC              100
Measurement Discussion: The following table shows that there was little effect on system performance when the GCS time slice was altered for the VTAM machine handling the CMS terminals. The key indicators of external response time (AVG LAST(T)) and internal throughput rate (ITR(H)) both show variations that are within normal run variation. The external response time increased by 2.1% and the internal throughput improved by 0.2%. The most significant effect for this tuning parameter should be in running GCS applications that have a large number of tasks or subtasks.
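The Difference and %Difference columns in Table 41 are simple deltas against the TSLICE 300 baseline. This sketch shows the derivation using the ITR (H) values transcribed from the table:

```python
# Derivation of Table 41's Difference and %Difference columns
# (ITR (H) values transcribed from the report).
base, new = 79.12, 79.27          # ITR (H) at TSLICE 300 vs TSLICE 30
diff = new - base                 # absolute difference
pct = diff / base * 100           # percent relative to the baseline run
print(f"ITR (H): difference {diff:+.2f}, {pct:+.2f}%")
```

The result, +0.15 and about +0.19%, matches the table; note that the report computes some entries from unrounded values, so a few columns differ slightly from what the printed figures would give.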
Table 41 (Page 1 of 3). GCS time slice for the VTAM machine on a 9121-480

GCS TSLICE              300          30
Release                 2.1.0        2.1.0
Run ID                  L28E190D     L28E190G     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Response Time
  TRIV INT              0.126        0.121        -0.005       -3.97%
  NONTRIV INT           0.365        0.359        -0.006       -1.64%
  TOT INT               0.286        0.279        -0.007       -2.45%
  TOT INT ADJ           0.251        0.246        -0.004       -1.75%
  AVG FIRST (T)         0.223        0.230         0.007        3.36%
  AVG LAST (T)          0.305        0.312         0.006        2.13%

Throughput
  AVG THINK (T)         26.18        26.15        -0.02        -0.10%
  ETR                   58.57        58.99         0.42         0.72%
  ETR (T)               66.82        66.82         0.00         0.00%
  ETR RATIO             0.877        0.883         0.006        0.72%
  ITR (H)               79.12        79.27         0.15         0.19%
  ITR                   34.69        35.01         0.32         0.92%
  EMUL ITR              51.43        52.01         0.57         1.12%
  ITRR (H)              1.000        1.002         0.002        0.19%
  ITRR                  1.000        1.009         0.009        0.92%

Proc. Usage
  PBT/CMD (H)           25.279       25.230       -0.048       -0.19%
  PBT/CMD               25.293       25.293        0.000        0.00%
  CP/CMD (H)            8.805        8.823         0.018        0.21%
  CP/CMD                8.231        8.381         0.150        1.82%
  EMUL/CMD (H)          16.474       16.407       -0.066       -0.40%
  EMUL/CMD              17.061       16.912       -0.150       -0.88%

Processor Util.
  TOTAL (H)             168.90       168.58       -0.32        -0.19%
  TOTAL                 169.00       169.00        0.00         0.00%
  UTIL/PROC (H)         84.45        84.29        -0.16        -0.19%
  UTIL/PROC             84.50        84.50         0.00         0.00%
  TOTAL EMUL (H)        110.07       109.63       -0.44        -0.40%
  TOTAL EMUL            114.00       113.00       -1.00        -0.88%
  MASTER TOTAL (H)      83.89        83.67        -0.22        -0.26%
  MASTER TOTAL          84.00        84.00         0.00         0.00%
  MASTER EMUL (H)       48.25        47.92        -0.33        -0.69%
  MASTER EMUL           50.00        50.00         0.00         0.00%
  TVR(H)                1.53         1.54          0.00         0.21%
  TVR                   1.48         1.50          0.01         0.88%
Table 41 (Page 2 of 3). GCS time slice for the VTAM machine on a 9121-480

GCS TSLICE              300          30
Release                 2.1.0        2.1.0
Run ID                  L28E190D     L28E190G     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Storage
  NUCLEUS SIZE (V)      2756KB       2756KB        0KB          0.00%
  TRACE TABLE (V)       400KB        400KB         0KB          0.00%
  WKSET (V)             82           82            0            0.00%
  PGBLPGS               54806        54767        -39          -0.07%
  PGBLPGS/USER          28.8         28.8          0.0         -0.07%
  FREEPGS               5657         5663          6            0.11%
  FREE UTIL             0.95         0.95          0.00        -0.11%
  SHRPGS                1383         1337         -46          -3.33%

Paging
  READS/SEC             638          644           6            0.94%
  WRITES/SEC            439          440           1            0.23%
  PAGE/CMD              16.119       16.223        0.105        0.65%
  PAGE IO RATE (V)      170.800      172.500       1.700        1.00%
  PAGE IO/CMD (V)       2.556        2.582         0.025        0.99%
  XSTOR IN/SEC          0            0             0            na
  XSTOR OUT/SEC         0            0             0            na
  XSTOR/CMD             0.000        0.000         0.000        na
  FAST CLR/CMD          8.456        8.456         0.000        0.00%

Queues
  DISPATCH LIST         36.88        36.52        -0.36        -0.97%
  ELIGIBLE LIST         0.00         0.02          0.02         na

I/O
  VIO RATE              695          695           0            0.00%
  VIO/CMD               10.402       10.401        0.000        0.00%
  RIO RATE (V)          365          368           3            0.82%
  RIO/CMD (V)           5.463        5.507         0.045        0.82%
  NONPAGE RIO/CMD (V)   2.906        2.926         0.019        0.67%
  DASD RESP TIME (V)    19.700       19.700        0.000        0.00%
  MDC REAL SIZE (MB)    41.4         41.3         -0.1         -0.22%
  MDC XSTOR SIZE (MB)   0.0          0.0           0.0          na
  MDC READS (I/Os)      207          207           0            0.00%
  MDC WRITES (I/Os)     9.58         9.52         -0.06        -0.63%
  MDC AVOID             196          196           0            0.00%
  MDC HIT RATIO         0.94         0.94          0.00         0.00%

PRIVOPs
  PRIVOP/CMD            13.867       13.875        0.008        0.06%
  DIAG/CMD              26.506       26.508        0.002        0.01%
  DIAG 04/CMD           2.483        2.484         0.001        0.06%
  DIAG 08/CMD           0.755        0.751        -0.004       -0.50%
  DIAG 0C/CMD           1.126        1.127         0.001        0.10%
  DIAG 14/CMD           0.025        0.025         0.000       -1.21%
  DIAG 58/CMD           1.249        1.248        -0.001       -0.09%
  DIAG 98/CMD           1.190        1.191         0.000        0.01%
  DIAG A4/CMD           3.805        3.815         0.010        0.26%
  DIAG A8/CMD           2.835        2.825        -0.011       -0.37%
  DIAG 214/CMD          11.800       11.807        0.006        0.05%
  SIE/CMD               53.100       53.099       -0.001        0.00%
  SIE INTCPT/CMD        35.577       35.577       -0.001        0.00%
  FREE TOTL/CMD         49.882       49.852       -0.031       -0.06%
Table 41 (Page 3 of 3). GCS time slice for the VTAM machine on a 9121-480

GCS TSLICE              300          30
Release                 2.1.0        2.1.0
Run ID                  L28E190D     L28E190G     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

VTAM Machines
  WKSET (V)             507          499          -8           -1.58%
  TOT CPU/CMD (V)       4.0658       3.9992       -0.0666      -1.64%
  CP CPU/CMD (V)        1.4800       1.4883        0.0083       0.56%
  VIRT CPU/CMD (V)      2.5858       2.5110       -0.0748      -2.89%
  DIAG 98/CMD (V)       1.191        1.191         0.000        0.02%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Additional Evaluations
This portion of the report includes results from a number of additional VM/ESA performance measurement evaluations that have been conducted over the past year.

• "VM/ESA on the Server 500" on page 116 examines VM/ESA performance when a CMS-intensive workload is run on the PC Server 500 System/390.

• "RAMAC Array Family" on page 120 provides usage guidelines and measurement results for using the RAMAC Array Family with VM/ESA.

• "CMS Virtual Machine Modes" on page 125 compares the performance of running CMS users in 370 mode, XA mode, and XC mode virtual machines.

• "370 Accommodation" on page 131 quantifies the performance effects of running CMS users under the 370 accommodation facility.

• "Storage Constrained VSE Guest using MDC" on page 135 explores tuning considerations when using minidisk caching with VSE guests in a storage-constrained environment.

• "RSCS 3.2" on page 140 compares the performance of RSCS 3.2 to RSCS 3.1.

• "DirMaint 1.5" on page 145 compares the performance of DirMaint 1.5 to DirMaint 1.4.

• "VTAM 4.2.0" on page 153 compares the performance of VTAM 4.2.0 to VTAM 3.4.1 for a CMS-intensive environment.
VM/ESA on the Server 500
The results summarized in this section are excerpted from PC Server 500 System/390 Performance (WSC Flash 9522), which also contains CMS plus LAN file server results, VSE/ESA CICS (native and guest) results, MVS/ESA results, and tuning guidance. The interested reader should refer to this Flash and to IBM PC Server 500 S/390 ...Is it right for you? for further information. See "Referenced Publications" on page 5.
Introduction

The IBM PC Server 500 System/390* is a combination of the System/390 and PC Server architectures that provides access and use of both in a single package. While the S/390* instructions execute natively on a dedicated CMOS chip on the S/390 Microprocessor Complex, the execution of the S/390's I/O is handled by OS/2 device managers, device drivers, and S/390 channel emulation. The S/390 design point in the PC Server S/390 is unique when compared to other S/390 processors. In this implementation, S/390 devices (tapes and printers) are either channel attached (via S/370 Channel Emulator/A) or emulated on PC devices in a manner that is transparent to the S/390. In addition to emulating the S/390 I/O, the PC processor also supports OS/2 applications and advanced local area network (LAN) functions.
Measurement Description

The system configuration used for the performance measurements described in this section is listed in Table 42.
TPNS measurements were obtained by utilizing the AWS3172 device driver andestablishing a VTAM 3172 XCA connection across an isolated Token Ring LAN.
Table 42. System configuration for S/390 performance runs
Hard Drives 3
Approx GB/Drive 2.25G Fast/Wide
Average seek time (ms) 7.5
Average latency (ms) 4.17
Rotational speed (RPM) 7200
Array Stripe Width (KB) 64
Channels used on RAID adapter 1
Logical drive types RAID-5
Logical drives per array 2
Partitions per logical drive 1
Format type HPFS
OS/2 level V3 Warp Fullpack GA
LAN Adapter AutoLAN Streamer* MC 32
SCSI Adapter F/W Streaming RAID Adapter/A
Note: Seek, latency, and RPM specifications are advertised values for individual drives in the array and were not measured here.
A PC Server 500 S/390 running VM/ESA ESA Feature with VTAM was connected to a second server running VM/ESA 370 Feature with VTAM and IBM's TPNS. TPNS simulates actual VTAM cross domain logon sessions to the target system. The simulated VTAM sessions logged onto the VSCS APPL on the target system. CMS users were then logged on to the measured system. The users ran their scripts with TPNS measuring the end user response time through the VTAM network.
Measurements were obtained using the FS8F0R workload to get an understanding of how many CMS users can be supported in various S/390 storage sizes, while maintaining an average response time of (approximately) one second. The following conditions apply to these measurements:
• dedicated
The PC Server 500 was dedicated to processing the CMS workload. That is, there was no other activity coming in over the LAN during the measurement period.
• HPFS write caching (lazy on)
• no RAID adapter write caching (write-through)
• VM/ESA Version 1 Release 2.2
• emulated DASD volumes:
2 system volumes (9336)
3 page volumes (3380)
2 spool volumes (9336)
3 minidisk volumes (9336)
2 t-disk volumes (9336)
• 16KB CP trace table
• STORBUF 300 200 100
• LDUBUF 600 300 100
Measurement Discussion

The measurement results are summarized in Table 43.
The results show that, for this workload, the number of CMS users that can be supported is mostly determined by the amount of available S/390 memory. Contention for the S/390 processor is not a significant factor until 128MB are made available, at which point S/390 processor utilization rises to 80%.
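The reported utilizations are consistent with throughput times CPU cost per command. This sketch uses values transcribed from Table 43 and assumes the single S/390 processor of the PC Server 500 S/390:

```python
# Utilization check for Table 43 (values transcribed from the report):
# percent busy ~= ETR (commands/sec) x CPU/CMD (ms) / 10, one processor.
for users, etr, cpu_ms, reported in [(70, 2.44, 154, 37.5),
                                     (190, 6.57, 122, 80.3)]:
    util = etr * cpu_ms / 10.0
    print(f"{users} users: computed {util:.1f}% vs reported {reported}%")
```

Both computed values come out within a couple of tenths of the reported 37.5% and 80.3% utilizations, confirming the single-processor reading of the table.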
VM/ESA performs better on S/390 when steps are taken to minimize the amount of page I/O that the system has to do. Page I/Os are expensive because each I/O typically reads or writes multiple 4K pages. In addition, VM/ESA's block paging mechanism is optimized for traditional mainframe DASD and therefore does not work as well with the device emulation used here. We reduced the page I/O rate by taking the following tuning actions:
• The minidisk cache BIAS parameter was used to reduce the amount of real storage that was used for minidisk caching. This left more real storage available to reduce paging.
• A small CP trace table was used.
We also found that the response time impact of VM/ESA page I/O tends to be reduced when emulated CKD devices are used as paging volumes and multiple
Table 43. PC Server 500 S/390 Performance - CMS Workload

VM/ESA Release          1.2.2        1.2.2
RUN ID                  PC7E5075     PC7E5190

Environment
  S/390 Real Storage    32MB         128MB
  Users                 70           190
  MDC BIAS              0.1          0.2

Response Time (sec)
  TRIV INT              0.50         0.30
  AVG LAST (T)          0.99         1.02

Throughput
  ETR (T)               2.44         6.57

S/390 CPU Usage (msec)
  CPU/CMD (V)           154          122
  CP/CMD (V)            52           36
  EMUL/CMD (V)          102          86

S/390 Utilization
  TOTAL (V)             37.5         80.3

Paging
  PAGE IO RATE (V)      7.3          4.8
  PAGE/CMD              18.1         5.6
  PAGE IO/CMD (V)       3.0          0.7
  PGBLPGS/USER          100          163

I/O
  RIO RATE (V)          23           39
  MDC REAL SIZE (MB)    1.8          13.7
  MDC HIT RATIO         0.69         0.86

SPM2
  PC Utilization (S)    54.5         66.5
  I/O Req/sec (S)       49           61

Note: T=TPNS, V=VMPRF, S=SPM2, Unmarked=RTM
page volumes are defined. This does, however, result in higher PC utilizations relative to using FBA page volumes.
The PC utilization arises from handling the S/390 I/O requests. As this utilization increases, contention for the PC processor will cause I/O service times (as seen by the S/390) to increase. The PC processor, then, is one of the resources that can limit the S/390 I/O rate that can be sustained while still providing acceptable response times.
STORBUF and LDUBUF were set to high values in order to discourage eligible list formation. This tuning action was taken during preliminary measurements at high paging rates and was found to be helpful in that environment. Non-default STORBUF and LDUBUF settings may not have been necessary for the measurements shown here because the paging rates are much lower.
Note that the OS/2 I/O rate (as reported by SPM2) is higher than the S/390 I/O rate (as reported by VMPRF). This is because the S/390 I/O emulation code will sometimes split one S/390 I/O request into multiple OS/2 I/O requests.
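The size of that split can be estimated directly from the Table 43 rates; this sketch uses values transcribed from the report:

```python
# Average fan-out of S/390 I/O requests into OS/2 I/O requests,
# from the Table 43 rates (values transcribed from the report).
for users, os2_rate, s390_rate in [(70, 49, 23), (190, 61, 39)]:
    fanout = os2_rate / s390_rate
    print(f"{users} users: {os2_rate}/{s390_rate} = "
          f"{fanout:.2f} OS/2 I/Os per S/390 I/O")
```

The fan-out works out to roughly 2.1 at 70 users and 1.6 at 190 users, so on average each S/390 I/O generated between one and two additional OS/2 requests in these runs.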
Lower throughputs should be expected if there is no write caching in effect (either by OS/2 HPFS or by the RAID adapter). See the Flash for further information.
RAMAC Array Family
This section provides some usage guidelines for the RAMAC Array Family and documents the results of a measurement made to observe the effects of using a RAMAC Array Subsystem for VM/ESA. For additional information see the ITSC redbook IBM RAMAC Array Family. For more details on using RAMAC devices in a VM/ESA environment, see Using the IBM RAMAC Array DASD in an MVS, VM, or VSE Environment.
The following are general performance guidelines for using RAMAC Array devices in a VM/ESA environment.
• Ensure you are current on service for both hardware and software. For example, VM/ESA APAR VM59200 and VMPRF APAR VM59341 are required to process cache measurement data for the RAMAC subsystem. In addition, RTM/ESA APAR GC05363 is required for RTM to report correct I/O service times for the subsystem.
• Do proper capacity planning before migrating to a RAMAC environment. Your IBM storage specialist has modeling and sizing resources available. Use them to set appropriate performance expectations.
• Enable cache for RAMAC DASD behind 3990-3 and 3990-6 control units. Significant availability and reliability benefits are provided by the use of RAID technology in RAMAC. Sufficient cache is required to get good performance when this technology is used. Make sure performance planning includes determining an appropriate amount of cache and NVS (non-volatile storage). IBM storage specialists recommend enabling cache for all RAMAC DASD. This includes VM/ESA paging (more on paging below), where the traditional recommendation is to disable cache. Enabling cache is especially worthwhile when the Record Cache II feature of the 3990-6 is available.
• Placement of minidisks within a VM/ESA volume is not as important because these logical volumes are really spread across multiple real disks. Based on this, seeks data from the monitor is not as valuable as in the past, but can still be used to find heavily used minidisks.
• VM/ESA paging volumes should be low on the list of things to move to RAMAC. RAMAC is meant to increase availability and reliability; paging is temporary. Paging has a higher proportion of writes and poor re-reference patterns. Therefore, if you have a choice between moving paging space or other data, move the other data.
• Watch for data consolidation problems. If you are merging many single-density volumes onto RAMAC Arrays emulating 3390-3s, recognize that the I/O rate to that logical volume is the aggregate of the merged volumes.
• Balance I/O across drawers where possible. In addition to balancing I/O across channels, control units, and devices, the RAMAC drawers are a new level. While the drawer is typically not expected to be a bottleneck, it is an additional consideration.
Workload: FS8F0R
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)
DASD: (base measurement)
DASD: (RAMAC measurement)
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively. The 9394 control unit cache cannot be controlled by software and is always on.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Type of DASD   Control Unit   Number of Paths   Number of Volumes (PAGE, SPOOL, TDSK, User, Server, System)
3390-2         3990-2         4                 16 PAGE, 6 SPOOL, 6 TDSK
3390-2         3990-3         2                 2 (R)
3390-2         3990-3         4                 2, 2, 16 (R)
Type of DASD   Control Unit   Number of Paths   Number of Volumes (PAGE, SPOOL, TDSK, User, Server, System)
9395-B13       9394-2         4                 6 PAGE, 4 SPOOL, 6 TDSK
3390-2         3990-2         4                 10 PAGE, 2 SPOOL
3390-2         3990-3         2                 2 (R)
3390-2         3990-3         4                 2, 2, 16 (R)
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   512        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1900     Users        3MB/XC              100
Measurement Discussion: A measurement was made where 16 volumes of 3390-2 DASD behind a 3990-2 control unit were moved to a 9394-2 RAMAC Subsystem configured to emulate 3380K DASD on 9395-B13 DASD. The 16 volumes consisted of 6 page, 4 spool, and 6 T-disk volumes. The FS8F CMS-intensive workload was run on both configurations. When running with the RAMAC Subsystem, the external response time (AVG LAST(T)) decreased by 5.3% and the internal throughput rate (ITR(H)) did not change significantly. For other measurement results see Table 45 on page 123.
Table 44 shows a comparison of the service time for the volumes moved to the RAMAC Subsystem. Both the spool and T-disk volumes showed significant improvement in service time, while the paging volumes showed an increase in service time. There was over a 25% reduction in the connect time component of service time, which reflects the faster data transfer capabilities of the RAMAC. This reduction in connect time was true for all three types of data. Disconnect time decreased over 40% for spool and T-disk, but increased for the paging volumes. This disconnect time change reflects the benefit of control unit cache. The 9394 caching provides much greater benefit to spool and T-disk volumes than the paging volumes.
Table 44. Volumes moved to RAMAC Array Subsystem

Volume Usage   3990/3390 Service Time   RAMAC Service Time
Spool          14.0                     5.5
T-Disk         28.4                     16.2
Page           19.1                     21.5

Note: Service time is the average for each volume usage type and is in milliseconds.
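Expressed as percentage changes, the Table 44 service times work out as follows (a sketch; values transcribed from the table):

```python
# Percentage change in service time after the move to RAMAC
# (Table 44 values, milliseconds).
times = {"Spool": (14.0, 5.5), "T-Disk": (28.4, 16.2), "Page": (19.1, 21.5)}
for usage, (before, after) in times.items():
    pct = (after - before) / before * 100
    print(f"{usage}: {before} -> {after} ms ({pct:+.1f}%)")
```

Spool improves by about 61% and T-disk by about 43%, while page service time degrades by roughly 13%, which is consistent with the connect/disconnect discussion above.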
Table 45 (Page 1 of 2). RAMAC Array Subsystem on 9121-480

RAMAC Used              No           Yes
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E1900     L27E1903     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Response Time
  TRIV INT              0.133        0.131        -0.002       -1.50%
  NONTRIV INT           0.461        0.434        -0.027       -5.86%
  TOT INT               0.350        0.331        -0.019       -5.43%
  TOT INT ADJ           0.304        0.293        -0.011       -3.58%
  AVG FIRST (T)         0.272        0.260        -0.011       -4.05%
  AVG LAST (T)          0.385        0.364        -0.021       -5.33%

Throughput
  AVG THINK (T)         26.17        26.16        -0.01        -0.02%
  ETR                   58.16        59.23         1.07         1.84%
  ETR (T)               66.94        66.87        -0.07        -0.11%
  ETR RATIO             0.869        0.886         0.017        1.95%
  ITR (H)               74.29        74.32         0.04         0.05%
  ITR                   32.99        32.95        -0.04        -0.13%
  EMUL ITR              47.75        47.56        -0.19        -0.39%
  ITRR (H)              1.000        1.000         0.000        0.05%
  ITRR                  1.000        0.999        -0.001       -0.13%

Proc. Usage
  PBT/CMD (H)           26.923       26.910       -0.013       -0.05%
  PBT/CMD               26.291       26.918        0.627        2.38%
  CP/CMD (H)            8.826        8.845         0.019        0.22%
  CP/CMD                8.067        8.225         0.158        1.96%
  EMUL/CMD (H)          18.097       18.065       -0.032       -0.18%
  EMUL/CMD              18.225       18.693        0.469        2.57%

Processor Util.
  TOTAL (H)             180.23       179.94       -0.29        -0.16%
  TOTAL                 176.00       180.00        4.00         2.27%
  UTIL/PROC (H)         90.11        89.97        -0.14        -0.16%
  UTIL/PROC             88.00        90.00         2.00         2.27%
  TOTAL EMUL (H)        121.15       120.80       -0.35        -0.29%
  TOTAL EMUL            122.00       125.00        3.00         2.46%
  MASTER TOTAL (H)      89.82        89.62        -0.20        -0.23%
  MASTER TOTAL          88.00        90.00         2.00         2.27%
  MASTER EMUL (H)       53.68        53.43        -0.25        -0.47%
  MASTER EMUL           54.00        55.00         1.00         1.85%
  TVR(H)                1.49         1.49          0.00         0.13%
  TVR                   1.44         1.44          0.00        -0.18%

Storage
  NUCLEUS SIZE (V)      2572KB       2576KB        4KB          0.16%
  TRACE TABLE (V)       400KB        400KB         0KB          0.00%
  WKSET (V)             84           84            0            0.00%
  PGBLPGS               55263        55107        -156         -0.28%
  PGBLPGS/USER          29.1         29.0         -0.1         -0.28%
  FREEPGS               5446         5382         -64          -1.18%
  FREE UTIL             0.94         0.95          0.01         1.19%
  SHRPGS                1251         1323          72           5.76%
Table 45 (Page 2 of 2). RAMAC Array Subsystem on 9121-480

RAMAC Used              No           Yes
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E1900     L27E1903     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Paging
  READS/SEC             628          633           5            0.80%
  WRITES/SEC            441          445           4            0.91%
  PAGE/CMD              15.969       16.121        0.152        0.95%
  PAGE IO RATE (V)      180.900      181.100       0.200        0.11%
  PAGE IO/CMD (V)       2.702        2.708         0.006        0.22%
  XSTOR IN/SEC          0            0             0            na
  XSTOR OUT/SEC         0            0             0            na
  XSTOR/CMD             0.000        0.000         0.000        na
  FAST CLR/CMD          8.634        8.928         0.294        3.40%

Queues
  DISPATCH LIST         44.41        39.65        -4.77        -10.73%
  ELIGIBLE LIST         1.14         0.00         -1.14        -100.00%

I/O
  VIO RATE              648          671           23           3.55%
  VIO/CMD               9.680        10.035        0.355        3.66%
  RIO RATE (V)          362          366           4            1.10%
  RIO/CMD (V)           5.408        5.473         0.066        1.22%
  NONPAGE RIO/CMD (V)   2.705        2.765         0.060        2.21%
  DASD RESP TIME (V)    19.600       19.900        0.300        1.53%
  MDC REAL SIZE (MB)    40.8         40.4         -0.4         -0.87%
  MDC XSTOR SIZE (MB)   0.0          0.0           0.0          na
  MDC READS (I/Os)      185          192           7            3.78%
  MDC WRITES (I/Os)     9.18         9.62          0.44         4.79%
  MDC AVOID             174          181           7            4.02%
  MDC HIT RATIO         0.94         0.93         -0.01        -1.06%

PRIVOPs
  PRIVOP/CMD            13.457       13.871        0.415        3.08%
  DIAG/CMD              26.946       27.891        0.945        3.51%
  DIAG 04/CMD           2.431        2.422        -0.009       -0.36%
  DIAG 08/CMD           0.720        0.747         0.027        3.70%
  DIAG 0C/CMD           1.097        1.125         0.028        2.55%
  DIAG 14/CMD           0.024        0.024         0.001        3.70%
  DIAG 58/CMD           1.202        1.248         0.046        3.87%
  DIAG 98/CMD           1.029        1.064         0.035        3.38%
  DIAG A4/CMD           3.455        3.587         0.132        3.81%
  DIAG A8/CMD           2.714        2.817         0.102        3.77%
  DIAG 214/CMD          13.163       13.708        0.546        4.15%
  SIE/CMD               51.806       53.537        1.732        3.34%
  SIE INTCPT/CMD        33.674       34.264        0.590        1.75%
  FREE TOTL/CMD         48.683       49.485        0.801        1.65%

VTAM Machines
  WKSET (V)             510          508          -2           -0.39%
  TOT CPU/CMD (V)       3.7594       3.7802        0.0208       0.55%
  CP CPU/CMD (V)        1.4191       1.4373        0.0182       1.28%
  VIRT CPU/CMD (V)      2.3403       2.3429        0.0026       0.11%
  DIAG 98/CMD (V)       1.020        1.064         0.044        4.31%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
CMS Virtual Machine Modes
This section documents the results of measurements made to observe the effects of running with virtual machine modes of 370, XA, and XC on VM/ESA 1.2.2 for a CMS-intensive workload. No exploitation of XA or XC mode was done for these measurements. With the increased virtual storage availability, there is an opportunity in many customer environments to define additional shared segments to reduce real storage requirements, thus improving system performance.
Workload: FS8F0R
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Type of DASD   Control Unit   Number of Paths   Number of Volumes (PAGE, SPOOL, TDSK, User, Server, System)
3390-2         3990-2         4                 16 PAGE, 6 SPOOL, 6 TDSK
3390-2         3990-3         2                 2 (R)
3390-2         3990-3         4                 2, 2, 16 (R)
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   512        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1900     Users        3MB/varied          100
Measurement Discussion: The following table shows the performance cost when migrating from a 370 mode virtual machine to an XA mode virtual machine. External response time (AVG LAST(T)) increased by 8.6%. Internal throughput (ITR(H)) decreased by 2.0%, reflecting an increase in processor usage. The majority of this increase is in the CMS interrupt handlers to process the XA mode interrupts.
Table 46 (Page 1 of 3). Migration from 370 mode to XA mode for VM/ESA 1.2.2 on the 9121-480

VM Mode                 370          XA
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E190A     L27E190B     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Response Time
  TRIV INT              0.129        0.132         0.003        2.33%
  NONTRIV INT           0.410        0.438         0.028        6.83%
  TOT INT               0.317        0.335         0.018        5.68%
  TOT INT ADJ           0.276        0.295         0.019        6.84%
  AVG FIRST (T)         0.235        0.249         0.014        5.73%
  AVG LAST (T)          0.327        0.356         0.028        8.55%

Throughput
  AVG THINK (T)         26.14        26.21         0.07         0.27%
  ETR                   58.19        58.86         0.67         1.15%
  ETR (T)               66.76        66.79         0.03         0.05%
  ETR RATIO             0.872        0.881         0.010        1.10%
  ITR (H)               76.19        74.68        -1.51        -1.98%
  ITR                   33.22        32.92        -0.30        -0.90%
  EMUL ITR              48.74        47.62        -1.12        -2.29%
  ITRR (H)              1.000        0.980        -0.020       -1.98%
  ITRR                  1.000        0.991        -0.009       -0.90%

Proc. Usage
  PBT/CMD (H)           26.252       26.782        0.531        2.02%
  PBT/CMD               26.214       26.800        0.587        2.24%
  CP/CMD (H)            8.918        8.846        -0.072       -0.81%
  CP/CMD                8.388        8.235        -0.154       -1.83%
  EMUL/CMD (H)          17.334       17.937        0.602        3.48%
  EMUL/CMD              17.825       18.566        0.740        4.15%

Processor Util.
  TOTAL (H)             175.25       178.88        3.63         2.07%
  TOTAL                 175.00       179.00        4.00         2.29%
  UTIL/PROC (H)         87.63        89.44         1.81         2.07%
  UTIL/PROC             87.50        89.50         2.00         2.29%
  TOTAL EMUL (H)        115.72       119.80        4.08         3.52%
  TOTAL EMUL            119.00       124.00        5.00         4.20%
  MASTER TOTAL (H)      87.22        89.04         1.82         2.09%
  MASTER TOTAL          87.00        89.00         2.00         2.30%
  MASTER EMUL (H)       50.70        52.95         2.25         4.44%
  MASTER EMUL           52.00        55.00         3.00         5.77%
  TVR(H)                1.51         1.49         -0.02        -1.41%
  TVR                   1.47         1.44         -0.03        -1.84%
Table 46 (Page 2 of 3). Migration from 370 mode to XA mode for VM/ESA 1.2.2 on the 9121-480

VM Mode                 370          XA
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E190A     L27E190B     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Storage
  NUCLEUS SIZE (V)      2572KB       2572KB        0KB          0.00%
  TRACE TABLE (V)       400KB        400KB         0KB          0.00%
  WKSET (V)             83           84            1            1.20%
  PGBLPGS               55326        55316        -10          -0.02%
  PGBLPGS/USER          29.1         29.1          0.0         -0.02%
  FREEPGS               5380         5388          8            0.15%
  FREE UTIL             0.95         0.95          0.00        -0.15%
  SHRPGS                1235         1244          9            0.73%

Paging
  READS/SEC             649          628          -21          -3.24%
  WRITES/SEC            436          444           8            1.83%
  PAGE/CMD              16.253       16.050       -0.202       -1.24%
  PAGE IO RATE (V)      183.600      180.700      -2.900       -1.58%
  PAGE IO/CMD (V)       2.750        2.706        -0.045       -1.63%
  XSTOR IN/SEC          0            0             0            na
  XSTOR OUT/SEC         0            0             0            na
  XSTOR/CMD             0.000        0.000         0.000        na
  FAST CLR/CMD          8.898        8.953         0.056        0.63%

Queues
  DISPATCH LIST         39.22        40.44         1.22         3.12%
  ELIGIBLE LIST         0.00         0.00          0.00         na

I/O
  VIO RATE              678          677          -1           -0.15%
  VIO/CMD               10.156       10.136       -0.020       -0.19%
  RIO RATE (V)          378          369          -9           -2.38%
  RIO/CMD (V)           5.662        5.525        -0.137       -2.43%
  NONPAGE RIO/CMD (V)   2.912        2.819        -0.093       -3.18%
  DASD RESP TIME (V)    19.400       19.800        0.400        2.06%
  MDC REAL SIZE (MB)    42.4         41.5         -0.9         -2.22%
  MDC XSTOR SIZE (MB)   0.0          0.0           0.0          na
  MDC READS (I/Os)      192          192           0            0.00%
  MDC WRITES (I/Os)     9.69         9.54         -0.15        -1.55%
  MDC AVOID             181          181           0            0.00%
  MDC HIT RATIO         0.93         0.94          0.01         1.08%
The following table shows the performance cost when migrating from a 370 mode virtual machine to an XC mode virtual machine. External response time increased by 18.6% and internal throughput decreased by 3.2%. Relative to XA mode, ITR decreased by an additional 1.2% (3.2% - 2.0%). This is mostly due to pathlength increases in the CMS interrupt handlers to save and restore access registers.
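The "additional 1.2%" figure follows from comparing the two degraded ITRs directly rather than subtracting the rounded percentages; this sketch uses the ITR (H) values transcribed from the 370/XA and 370/XC comparison tables:

```python
# Relative ITR costs of XA and XC mode versus the 370-mode baseline,
# using ITR (H) values transcribed from the report (Tables 46 and 47).
itr_370, itr_xa, itr_xc = 76.19, 74.68, 73.76

xa_cost = (itr_xa - itr_370) / itr_370 * 100   # XA vs 370
xc_cost = (itr_xc - itr_370) / itr_370 * 100   # XC vs 370
xc_vs_xa = (itr_xc - itr_xa) / itr_xa * 100    # XC vs XA directly

print(f"XA vs 370: {xa_cost:.1f}%")
print(f"XC vs 370: {xc_cost:.1f}%")
print(f"XC vs XA:  {xc_vs_xa:.1f}%")
```

The direct XC-versus-XA comparison comes out at about -1.2%, matching the text's "additional 1.2%" beyond the 2.0% XA cost.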
Table 46 (Page 3 of 3). Migration from 370 mode to XA mode for VM/ESA 1.2.2 on the 9121-480

VM Mode                 370          XA
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E190A     L27E190B     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

PRIVOPs
  PRIVOP/CMD            13.784       13.883        0.100        0.72%
  DIAG/CMD              28.318       27.980       -0.338       -1.19%
  DIAG 04/CMD           2.396        2.396         0.000        0.00%
  DIAG 08/CMD           0.751        0.749        -0.001       -0.17%
  DIAG 0C/CMD           1.127        1.126        -0.001       -0.05%
  DIAG 14/CMD           0.025        0.024         0.000       -0.19%
  DIAG 58/CMD           1.248        1.248         0.000        0.00%
  DIAG 98/CMD           1.181        1.137        -0.044       -3.71%
  DIAG A4/CMD           3.591        3.588        -0.003       -0.08%
  DIAG A8/CMD           2.822        2.833         0.011        0.39%
  DIAG 214/CMD          13.474       13.725        0.251        1.86%
  SIE/CMD               54.570       53.781       -0.789       -1.45%
  SIE INTCPT/CMD        36.016       34.957       -1.059       -2.94%
  FREE TOTL/CMD         56.098       49.558       -6.539       -11.66%

VTAM Machines
  WKSET (V)             551          551           0            0.00%
  TOT CPU/CMD (V)       3.8697       3.8179       -0.0518      -1.34%
  CP CPU/CMD (V)        1.4730       1.4390       -0.0340      -2.31%
  VIRT CPU/CMD (V)      2.3967       2.3789       -0.0178      -0.74%
  DIAG 98/CMD (V)       1.181        1.137        -0.044       -3.75%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Table 47 (Page 1 of 2). Migration from 370 mode to XC mode for VM/ESA 1.2.2 on the 9121-480

VM Mode                 370          XC
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E190A     L27E1909     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Response Time
  TRIV INT              0.129        0.130         0.001        0.78%
  NONTRIV INT           0.410        0.456         0.046        11.22%
  TOT INT               0.317        0.345         0.028        8.83%
  TOT INT ADJ           0.276        0.307         0.031        11.13%
  AVG FIRST (T)         0.235        0.272         0.037        15.50%
  AVG LAST (T)          0.327        0.388         0.061        18.63%

Throughput
  AVG THINK (T)         26.14        26.15         0.01         0.04%
  ETR                   58.19        59.34         1.15         1.98%
  ETR (T)               66.76        66.67        -0.09        -0.13%
  ETR RATIO             0.872        0.890         0.018        2.11%
  ITR (H)               76.19        73.76        -2.42        -3.18%
  ITR                   33.22        32.85        -0.37        -1.11%
  EMUL ITR              48.74        47.65        -1.09        -2.23%
  ITRR (H)              1.000        0.968        -0.032       -3.18%
  ITRR                  1.000        0.989        -0.011       -1.11%

Proc. Usage
  PBT/CMD (H)           26.252       27.113        0.862        3.28%
  PBT/CMD               26.214       27.147        0.933        3.56%
  CP/CMD (H)            8.918        9.009         0.091        1.02%
  CP/CMD                8.388        8.399         0.011        0.13%
  EMUL/CMD (H)          17.334       18.104        0.770        4.44%
  EMUL/CMD              17.825       18.748        0.923        5.18%

Processor Util.
  TOTAL (H)             175.25       180.77        5.52         3.15%
  TOTAL                 175.00       181.00        6.00         3.43%
  UTIL/PROC (H)         87.63        90.39         2.76         3.15%
  UTIL/PROC             87.50        90.50         3.00         3.43%
  TOTAL EMUL (H)        115.72       120.71        4.99         4.31%
  TOTAL EMUL            119.00       125.00        6.00         5.04%
  MASTER TOTAL (H)      87.22        89.96         2.75         3.15%
  MASTER TOTAL          87.00        90.00         3.00         3.45%
  MASTER EMUL (H)       50.70        53.35         2.65         5.22%
  MASTER EMUL           52.00        55.00         3.00         5.77%
  TVR(H)                1.51         1.50         -0.02        -1.11%
  TVR                   1.47         1.45         -0.02        -1.54%

Storage
  NUCLEUS SIZE (V)      2572KB       2572KB        0KB          0.00%
  TRACE TABLE (V)       400KB        400KB         0KB          0.00%
  WKSET (V)             83           84            1            1.20%
  PGBLPGS               55326        55084        -242         -0.44%
  PGBLPGS/USER          29.1         29.0         -0.1         -0.44%
  FREEPGS               5380         5406          26           0.48%
  FREE UTIL             0.95         0.95          0.00        -0.48%
  SHRPGS                1235         1313          78           6.32%
Table 47 (Page 2 of 2). Migration from 370 mode to XC mode for VM/ESA 1.2.2 on the 9121-480

VM Mode                 370          XC
Release                 VM/ESA 2.2   VM/ESA 2.2
Run ID                  L27E190A     L27E1909     Difference   %Difference

Environment
  Real Storage          256MB        256MB
  Exp. Storage          0MB          0MB
  Users                 1900         1900
  VTAMs                 1            1
  VSCSs                 0            0
  Processors            2            2

Paging
  READS/SEC             649          606          -43          -6.63%
  WRITES/SEC            436          445           9            2.06%
  PAGE/CMD              16.253       15.763       -0.489       -3.01%
  PAGE IO RATE (V)      183.600      181.700      -1.900       -1.03%
  PAGE IO/CMD (V)       2.750        2.725        -0.025       -0.91%
  XSTOR IN/SEC          0            0             0            na
  XSTOR OUT/SEC         0            0             0            na
  XSTOR/CMD             0.000        0.000         0.000        na
  FAST CLR/CMD          8.898        8.954         0.056        0.63%

Queues
  DISPATCH LIST         39.22        41.98         2.76         7.03%
  ELIGIBLE LIST         0.00         0.00          0.00         na

I/O
  VIO RATE              678          671          -7           -1.03%
  VIO/CMD               10.156       10.064       -0.092       -0.91%
  RIO RATE (V)          378          393           15           3.97%
  RIO/CMD (V)           5.662        5.894         0.232        4.10%
  NONPAGE RIO/CMD (V)   2.912        3.169         0.257        8.83%
  DASD RESP TIME (V)    19.400       19.200       -0.200       -1.03%
  MDC REAL SIZE (MB)    42.4         41.4         -1.1         -2.49%
  MDC XSTOR SIZE (MB)   0.0          0.0           0.0          na
  MDC READS (I/Os)      192          191          -1           -0.52%
  MDC WRITES (I/Os)     9.69         9.55         -0.14        -1.44%
  MDC AVOID             181          180          -1           -0.55%
  MDC HIT RATIO         0.93         0.94          0.01         1.08%

PRIVOPs
  PRIVOP/CMD            13.784       13.906        0.122        0.89%
  DIAG/CMD              28.318       27.861       -0.457       -1.61%
  DIAG 04/CMD           2.396        2.401         0.005        0.19%
  DIAG 08/CMD           0.751        0.752         0.001        0.16%
  DIAG 0C/CMD           1.127        1.126        -0.001       -0.06%
  DIAG 14/CMD           0.025        0.025         0.000        0.49%
  DIAG 58/CMD           1.248        1.248         0.000        0.02%
  DIAG 98/CMD           1.181        1.081        -0.100       -8.47%
  DIAG A4/CMD           3.591        3.590        -0.001       -0.04%
  DIAG A8/CMD           2.822        2.823         0.001        0.02%
  DIAG 214/CMD          13.474       13.665        0.191        1.42%
  SIE/CMD               54.570       54.070       -0.500       -0.92%
  SIE INTCPT/CMD        36.016       34.605       -1.411       -3.92%
  FREE TOTL/CMD         56.098       49.495       -6.603       -11.77%

VTAM Machines
  WKSET (V)             551          559           8            1.45%
  TOT CPU/CMD (V)       3.8697       3.7996       -0.0701      -1.81%
  CP CPU/CMD (V)        1.4730       1.4415       -0.0315      -2.14%
  VIRT CPU/CMD (V)      2.3967       2.3581       -0.0386      -1.61%
  DIAG 98/CMD (V)       1.181        1.081        -0.101       -8.54%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
370 Accommodation
This section documents the results of a measurement made to observe the effects of running with 370 accommodation mode set on. A measurement was made where CMS370AC was set on for all user virtual machines. This also causes 370ACCOM to be implicitly set on for CP. This measurement was then compared to a measurement without CMS370AC set on.
Workload: FS8F0R
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Type of   Control   Number      - Number of Volumes -
DASD      Unit      of Paths    (PAGE SPOOL TDSK User Server System)

3390-2    3990-2    4           16 6 6
3390-2    3990-3    2           2 R
3390-2    3990-3    4           2 2 16 R
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   512        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1900     Users        3MB/XC              100
Additional Evaluations 131
Measurement Discussion: The following table shows that there is little effect on system performance for this workload when 370 accommodation mode is turned on and not actually required. The key indicators of external response time (AVG LAST (T)) and internal throughput rate (ITR (H)) both show variations that are within normal run variation. The external response time improved by 1.3% and the internal throughput decreased by 0.2%.

The most significant effect is the addition of a Diagnose code X'268' when CMS is processing interrupts. This suggests that there could be a measurable effect on system performance for workloads that have high interrupt rates. It is worth noting that when only the CP portion of the 370 accommodation facility is used (that is, 370ACCOM ON but CMS370AC OFF), these diagnose calls do not occur and performance is therefore not sensitive to the interrupt rate.

There is a performance cost associated with changing CMS virtual machines from 370 mode to XA or XC mode. Measurements on VM/ESA 1.2.2 using this same CMS-intensive workload showed a 2.0% ITR decrease when all user virtual machines were switched from 370 mode to XA mode and a 3.2% ITR decrease when the user machines were switched from 370 mode to XC mode. Refer to "CMS Virtual Machine Modes" on page 125 for additional information.
Table 48 (Page 1 of 3). 370 accommodation mode comparison on 9121-480

CMS370AC           OFF        ON
Release            2.1.0      2.1.0
Run ID             L28E190D   L28E190H   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1900       1900
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

Response Time
  TRIV INT         0.126      0.125      -0.001      -0.79%
  NONTRIV INT      0.365      0.363      -0.002      -0.55%
  TOT INT          0.286      0.284      -0.002      -0.70%
  TOT INT ADJ      0.251      0.249      -0.002      -0.81%
  AVG FIRST (T)    0.223      0.218      -0.005      -2.24%
  AVG LAST (T)     0.305      0.302      -0.004      -1.31%

Throughput
  AVG THINK (T)    26.18      26.17      0.00        0.00%
  ETR              58.57      58.52      -0.05       -0.09%
  ETR (T)          66.82      66.83      0.02        0.02%
  ETR RATIO        0.877      0.876      -0.001      -0.11%
  ITR (H)          79.12      78.99      -0.13       -0.16%
  ITR              34.69      34.59      -0.10       -0.29%
  EMUL ITR         51.43      51.57      0.14        0.27%
  ITRR (H)         1.000      0.998      -0.002      -0.16%
  ITRR             1.000      0.997      -0.003      -0.29%

Proc. Usage
  PBT/CMD (H)      25.279     25.319     0.041       0.16%
  PBT/CMD          25.293     25.287     -0.006      -0.02%
  CP/CMD (H)       8.805      8.898      0.093       1.06%
  CP/CMD           8.231      8.379      0.148       1.79%
  EMUL/CMD (H)     16.474     16.421     -0.052      -0.32%
  EMUL/CMD         17.061     16.908     -0.154      -0.90%
Table 48 (Page 2 of 3). 370 accommodation mode comparison on 9121-480

CMS370AC           OFF        ON
Release            2.1.0      2.1.0
Run ID             L28E190D   L28E190H   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1900       1900
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

Processor Util.
  TOTAL (H)          168.90     169.22     0.31        0.18%
  TOTAL              169.00     169.00     0.00        0.00%
  UTIL/PROC (H)      84.45      84.61      0.16        0.18%
  UTIL/PROC          84.50      84.50      0.00        0.00%
  TOTAL EMUL (H)     110.07     109.75     -0.32       -0.29%
  TOTAL EMUL         114.00     113.00     -1.00       -0.88%
  MASTER TOTAL (H)   83.89      83.94      0.05        0.06%
  MASTER TOTAL       84.00      84.00      0.00        0.00%
  MASTER EMUL (H)    48.25      47.90      -0.35       -0.72%
  MASTER EMUL        50.00      50.00      0.00        0.00%
  TVR(H)             1.53       1.54       0.01        0.48%
  TVR                1.48       1.50       0.01        0.88%

Storage
  NUCLEUS SIZE (V)   2756KB     2756KB     0KB         0.00%
  TRACE TABLE (V)    400KB      400KB      0KB         0.00%
  WKSET (V)          82         83         1           1.22%
  PGBLPGS            54806      54703      -103        -0.19%
  PGBLPGS/USER       28.8       28.8       -0.1        -0.19%
  FREEPGS            5657       5745       88          1.56%
  FREE UTIL          0.95       0.94       -0.01       -1.53%
  SHRPGS             1383       1349       -34         -2.46%

Paging
  READS/SEC          638        637        -1          -0.16%
  WRITES/SEC         439        443        4           0.91%
  PAGE/CMD           16.119     16.160     0.041       0.26%
  PAGE IO RATE (V)   170.800    171.100    0.300       0.18%
  PAGE IO/CMD (V)    2.556      2.560      0.004       0.15%
  XSTOR IN/SEC       0          0          0           na
  XSTOR OUT/SEC      0          0          0           na
  XSTOR/CMD          0.000      0.000      0.000       na
  FAST CLR/CMD       8.456      8.454      -0.002      -0.02%

Queues
  DISPATCH LIST      36.88      38.57      1.70        4.60%
  ELIGIBLE LIST      0.00       0.00       0.00        na

I/O
  VIO RATE             695        701        6           0.86%
  VIO/CMD              10.402     10.489     0.087       0.84%
  RIO RATE (V)         365        366        1           0.27%
  RIO/CMD (V)          5.463      5.476      0.014       0.25%
  NONPAGE RIO/CMD (V)  2.906      2.916      0.010       0.34%
  DASD RESP TIME (V)   19.700     19.800     0.100       0.51%
  MDC REAL SIZE (MB)   41.4       41.2       -0.2        -0.56%
  MDC XSTOR SIZE (MB)  0.0        0.0        0.0         na
  MDC READS (I/Os)     207        207        0           0.00%
  MDC WRITES (I/Os)    9.58       9.60       0.02        0.21%
  MDC AVOID            196        196        0           0.00%
  MDC HIT RATIO        0.94       0.94       0.00        0.00%
Table 48 (Page 3 of 3). 370 accommodation mode comparison on 9121-480

CMS370AC           OFF        ON
Release            2.1.0      2.1.0
Run ID             L28E190D   L28E190H   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1900       1900
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

PRIVOPs
  PRIVOP/CMD       13.867     13.939     0.072       0.52%
  DIAG/CMD         26.506     27.651     1.146       4.32%
  DIAG 04/CMD      2.483      2.481      -0.002      -0.08%
  DIAG 08/CMD      0.755      0.755      0.000       0.05%
  DIAG 0C/CMD      1.126      1.125      -0.001      -0.10%
  DIAG 14/CMD      0.025      0.025      0.000       -0.68%
  DIAG 58/CMD      1.249      1.334      0.085       6.84%
  DIAG 98/CMD      1.190      1.199      0.008       0.71%
  DIAG A4/CMD      3.805      3.806      0.001       0.02%
  DIAG A8/CMD      2.835      2.834      -0.001      -0.05%
  DIAG 214/CMD     11.800     10.960     -0.840      -7.12%
  DIAG 268/CMD     0.000      1.885      1.885       na
  SIE/CMD          53.100     55.197     2.097       3.95%
  SIE INTCPT/CMD   35.577     37.534     1.957       5.50%
  FREE TOTL/CMD    49.882     49.916     0.033       0.07%

VTAM Machines
  WKSET (V)          507        527        20          3.94%
  TOT CPU/CMD (V)    4.0658     4.0233     -0.0425     -1.05%
  CP CPU/CMD (V)     1.4800     1.4963     0.0163      1.10%
  VIRT CPU/CMD (V)   2.5858     2.5270     -0.0588     -2.27%
  DIAG 98/CMD (V)    1.191      1.199      0.009       0.74%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Storage Constrained VSE Guest using MDC
This section examines the use of minidisk cache (MDC) in a storage constrained configuration.
Figure 13. Storage constrained VSE guest measurements: external throughput.
In this series of measurements, 32MB of central storage was configured. The first measurement point was a V=V guest with dedicated DASD configured, so that no MDC was used. The resultant ETR was 7.09. The next measurement point used a V=V guest with MDC. In this case the VM/ESA system residence DASD had 100 cylinders of page space allocated, of which 100% was used during the measurement. A loss of about 62% in ETR occurred. By adding 3 page volumes to the system, the ETR more than doubled. This is the result of fewer I/Os due to more efficient block paging and lower I/O access times due to a lower request rate per paging device. In the final measurement point, the CP SET RESERVE command was specified for the guest (size based on the resident storage pages). SET RESERVE reduces the serial page faults in the guest machine. This resulted in an ETR increase of over 16% relative to the previous case. Overall, the use of additional page volumes and SET RESERVE resulted in a net 4.5% ETR improvement relative to the non-MDC base case.
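The percentages quoted above can be checked directly from the four measured ETR values in Table 49 (7.09, 2.72, 6.35, and 7.41); a quick sketch of the arithmetic:

```python
# ETR values from the four measurement points in Table 49:
# no MDC (dedicated DASD), MDC with 1 page volume,
# MDC with 4 page volumes, and MDC with 4 page volumes + SET RESERVE.
etr = [7.09, 2.72, 6.35, 7.41]

def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100.0

mdc_loss = pct_change(etr[0], etr[1])       # loss of about 62%
page_vol_ratio = etr[2] / etr[1]            # more than doubled
reserve_gain = pct_change(etr[2], etr[3])   # over 16%
net_gain = pct_change(etr[0], etr[3])       # net 4.5% over the base case
```

The computed values agree with the text: about -62%, a factor of more than 2, about +17%, and a net +4.5%.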
Figure 14. Storage constrained VSE guest measurements: internal throughput.
Figure 14 shows the corresponding ITRs for the four measurements described above. As a result of heavy paging, the ITR for the last measurement scenario did not exceed the measurement point where MDC was not used. Additional real storage would somewhat benefit the guest measurement where MDC is not being used, but greatly benefit the cases where MDC is used.
Workload: DYNAPACE
Hardware Configuration
Processor model: 9121-320 (see footnote 18)
Storage:
  Real: 128MB
  Expanded: 0MB
DASD:
Software Configuration
VSE version: 2.1.0
Virtual Machines:
For all guest measurements in this section, VSE/ESA was run in a V=V virtual machine. The VM system used for these guest measurements has a 96MB V=R area defined. For these V=V measurements, the V=R area is configured, but not used. Therefore, if the real storage configuration on the processor is 128MB, then 32MB of usable storage is available for the VM system and V=V guest. It is this effective real storage size that is shown in this section's measurement results tables.
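The effective storage figure follows from simple subtraction of the configured (but unused) V=R area from the processor's real storage:

```python
# Real storage on the 9121-320 and the configured-but-unused V=R area.
real_storage_mb = 128
vr_area_mb = 96

# Usable storage left for the VM system and the V=V guest.
effective_mb = real_storage_mb - vr_area_mb
```

This 32MB is the "Real Storage" value shown in Table 49.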
Type of   Control   Number      - Number of Volumes -
DASD      Unit      of Paths    (PAGE SPOOL TDSK VSAM VSE Sys. VM Sys.)

3380-A    3880-03   2           1
3390-2    3990-02   4           10 2
3380-K    3990-03   4           10
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
VSEVV             1        VSE V=V      96MB/ESA            100                IOASSIST OFF
SMART             1        RTM          16MB/370            100
WRITER            1        CP monitor   2MB/XA              100
18 See “Hardware” on page 21 for an explanation of how this processor model was defined.
Table 49 (Page 1 of 2). VSE/ESA storage constrained environment on the 9121-320

MDC                  NO         YES        YES        YES
Page Volumes         1          1          4          4
Set Reserve          NO         NO         NO         YES
VM/ESA Release       2.1.0      2.1.0      2.1.0      2.1.0
Run ID               L1V88PF3   L1V88PF4   L1V88PF1   L1V88PF2

Environment
  IML Mode           ESA        ESA        ESA        ESA
  Real Storage       32MB       32MB       32MB       32MB
  Exp. Storage       0MB        0MB        0MB        0MB
  VM Mode            ESA        ESA        ESA        ESA
  VM Size            96M        96M        96M        96M
  Guest Setting      V=V        V=V        V=V        V=V
  VSE Supervisor     ESA        ESA        ESA        ESA
  Processors         1          1          1          1

Throughput (Min)
  Elapsed Time (C)   948.0      2470.0     1057.0     907.0
  ETR (C)            7.09       2.72       6.35       7.41
  ITR (H)            12.50      11.08      11.73      11.94
  ITR                12.44      10.88      11.77      11.95
  ITRR (H)           1.000      0.886      0.939      0.955
  ITRR               1.000      0.875      0.947      0.961

Proc. Usage (Sec)
  PBT/CMD (H)        4.800      5.418      5.109      5.024
  PBT/CMD            4.825      5.513      5.096      5.021
  CP/CMD (H)         1.248      1.864      1.589      1.508
  CP/CMD             1.185      1.764      1.416      1.377
  EMUL/CMD (H)       3.552      3.553      3.520      3.516
  EMUL/CMD           3.640      3.749      3.681      3.644

Processor Util.
  TOTAL (H)          56.71      24.57      54.13      62.04
  TOTAL              57.00      25.00      54.00      62.00
  TOTAL EMUL (H)     41.96      16.11      37.30      43.42
  TOTAL EMUL         43.00      17.00      39.00      45.00
  TVR(H)             1.35       1.52       1.45       1.43
  TVR                1.33       1.47       1.38       1.38

Storage
  NUCLEUS SIZE (V)   2776KB     2776KB     2776KB     2776KB
  TRACE TABLE (V)    200KB      200KB      200KB      200KB
  PGBLPGS            6163       5937       6153       6146
  FREEPGS            97         100        104        107
  FREE UTIL          0.59       0.57       0.58       0.59
  SHRPGS             34         19         13         20

Paging
  PAGE/CMD           220.071    2844.911   2784.062   1643.938
  XSTOR/CMD          0.000      0.000      0.000      0.000
  FAST CLR/CMD       110.036    286.696    235.937    242.946

I/O
  VIO RATE             369.000    141.000    330.000    386.000
  VIO/CMD              3123.321   3109.554   3114.375   3125.911
  RIO RATE (V)         370.000    146.000    266.000    296.000
  RIO/CMD (V)          3131.786   3219.821   2510.375   2397.071
  DASD IO TOTAL (V)    354980     358698     286647     265495
  DASD IO RATE (V)     369.77     145.81     265.41     294.99
  DASD IO/CMD (V)      3129.85    3215.68    2504.84    2388.93
  MDC REAL SIZE (MB)   2.2        16.3       16.0       16.3
  MDC XSTOR SIZE (MB)  0.0        0.0        0.0        0.0
  MDC READS (I/Os)     0.03       86         201        235
  MDC WRITES (I/Os)    0.01       42         98         115
  MDC AVOID            0.00       48         119        134
  MDC HIT RATIO        0.00       0.52       0.55       0.53
Table 49 (Page 2 of 2). VSE/ESA storage constrained environment on the 9121-320

MDC                  NO         YES        YES        YES
Page Volumes         1          1          4          4
Set Reserve          NO         NO         NO         YES
VM/ESA Release       2.1.0      2.1.0      2.1.0      2.1.0
Run ID               L1V88PF3   L1V88PF4   L1V88PF1   L1V88PF2

Environment
  IML Mode           ESA        ESA        ESA        ESA
  Real Storage       32MB       32MB       32MB       32MB
  Exp. Storage       0MB        0MB        0MB        0MB
  VM Mode            ESA        ESA        ESA        ESA
  VM Size            96M        96M        96M        96M
  Guest Setting      V=V        V=V        V=V        V=V
  VSE Supervisor     ESA        ESA        ESA        ESA
  Processors         1          1          1          1

PRIVOPs
  PRIVOP/CMD (R)     3125.002   3126.460   3122.455   3123.384
  DIAG/CMD (R)       652.286    1212.099   686.429    627.402
  SIE/CMD            13915.286  15503.661  14458.250  14171.875
  SIE INTCPT/CMD     11688.840  13023.075  11855.765  11904.375
  FREE TOTL/CMD      3758.143   6990.982   6134.375   5976.482

Note: V=VMPRF, H=Hardware Monitor, C=VSE console, Unmarked=RTM
RSCS 3.2
This section describes the performance test activity for RSCS Version 3 Release 2. Measurement data were collected for RSCS Version 3 Release 2 and compared to RSCS Version 3 Release 1. These products are referred to as RSCS 3.2 and RSCS 3.1 throughout this section.

RSCS 3.2 supports 31-bit addressing, which allows storage to be used above the 16M line. This will alleviate storage problems which required some customers to have multiple RSCS machines defined to their networks. RSCS 3.2 also has TCP/IP support for four new line drivers.

RSCS Methodology

The RSCS throughput measurements involved transferring a fixed amount of data between two systems and measuring the elapsed time and processor usage required to complete the transfer. Measurements were made while both sending and receiving data simultaneously.

The number of files transferred was held constant for all of the configurations. For each of the line drivers, 50 copies of 9 different files were sent, for a total of 450 files (see the "RSCS Workload" section for more details). This amount of work took about 14 minutes to complete.

Multiple measurements were made to ensure repeatability. Three measurements were done for each type of data transfer within each configuration and the results were averaged. This was done to improve the accuracy of the CPU data reported by the INDICATE USER command.

Execs were written to handle the simultaneous sending and receiving of files between both systems. The system that originally sends the files and later receives them is designated as System A, and the other system that receives and resends as System B. Only System A was measured.

An exec was executed from a CMS user on System A to generate the correct number of spool files, in the desired order, and to send them to the remote user. Before sending the files, the RSCS HOLD <linkid> command was issued to ensure that no transmission activity took place before all of the files had been received by RSCS. The SENDFILE command was used to send the files. After all the files had been received by RSCS, the RSCS FREE <linkid> command was issued to initiate the transmission and begin the measurement. At that time, the exec entered into a loop to issue the 50 SMSG commands, one every 4 seconds. On System B, the user that was to receive the files was running an exec that used the WAKEUP MODULE when a RDR file was received. When a file arrived, it was received to the user's A-disk and then sent back to RSCS and, from there, to the original user on System A who sent the files. The measurement completed when all the files had been received by the original user.
RSCS Workload

The RSCS throughput workload was built from multiple copies of a set of nine files and one SMSG command. The sizes of the files were 50 records, 200 records, and 1500 records. Each record was 80 characters in length. The compressibility of the files was also varied. RSCS compression of files is an important factor in the transmission time on TP links. In general, a record is compressible by RSCS if it contains strings of duplicate characters. The nine files are described below in the order that they are initially sent:

1500 records, noncompress
200 records, semicompress
1500 records, compress
50 records, semicompress
1500 records, semicompress
50 records, compress
50 records, noncompress
200 records, compress
200 records, noncompress
The records in the above files are as follows:
noncompress:  1234567890 (repeated 7 more times)
semicompress: 12345678901111111111 (repeated 3 more times)
compress:     11111111112222222222 ... 8888888888
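The exact exec that built these files is not shown in this report, but the 80-character record types and the 450-file total can be reconstructed from the descriptions above; an illustrative sketch:

```python
# Each record is 80 characters long (per the workload description).
noncompress = "1234567890" * 8                        # "repeated 7 more times"
semicompress = "12345678901111111111" * 4             # "repeated 3 more times"
compress = "".join(str(d) * 10 for d in range(1, 9))  # 1111111111 ... 8888888888

# 50 copies of the 9 files gives the 450 files sent per measurement.
total_files = 50 * 9
```

All three records come out to exactly 80 characters, matching the stated record length.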
The SMSG command was a query of the receiving RSCS virtual machine's active links and should pass immediately over the network.
SMSG command: SMSG <rscs1> CMD <rscs2> Q SYS A
SYSTEM Configuration

All of the RSCS performance measurements were made on a 9121-742 physically partitioned into two systems, each having two processors, 512MB of real storage, and 512MB of expanded storage. This resulted in a nonpaging environment for all measured cases. The measured system was connected to the other system via a 4.5 megabyte/sec 3088 (CTCA).

VM/ESA 2.1.0, VTAM 3.4.1, and TCP/IP 2.3 were used on both systems. RSCS 3.2 was at a pre-release level.

The following parameters were used for the VTAM virtual machines for both release levels of the SNANJE measurements.

VTAM Virtual Machine:
- Virtual Storage Size: 64MB
- QUICKDSP ON
- RESERVE 512
- SHARE 10000
- Directory: OPTION ... DIAG98
Additional Evaluations 141
RSCS 3.2
The following parameters were used for the RSCS virtual machines for both RSCS 3.1 and RSCS 3.2. All line drivers were run with the default number of streams (2) defined.

RSCS Virtual Machine:
- Virtual Storage Size: 24MB
- QUICKDSP ON
The following parameters were used for the TCP/IP virtual machine.
TCP/IP Virtual Machine:
- Virtual Storage Size: 64MB
- QUICKDSP ON
- Directory: OPTION ... DIAG98
NJE Line Driver Results

The NJE line driver runs under the Group Control System (GCS). It provides VM with both Binary Synchronous Communication (BSC) and Channel to Channel Adapter (CTCA) line protocols to communicate with VM and non-VM NJE systems. It uses multi-streaming to transfer multiple files concurrently over the same link.
Below is the configuration used by the NJE line driver:
SYSTEM A SYSTEM B
          ┌──────┐                      ┌──────┐
          │ RSCS │                      │ RSCS │
          │ NJE  ├─────── 3088 ────────┤ NJE  │
 ┌──────┐ ├──────┤                     ├──────┤ ┌──────┐
 │ CMS  │ │ GCS  │                     │ GCS  │ │ CMS  │
 │ USER │ │      │                     │      │ │ USER │
 ├──────┴─┴──────┤                     ├──────┴─┴──────┤
 │       CP      │                     │       CP      │
 └───────────────┘                     └───────────────┘
Measurement Discussion: The following table shows the throughput results when using the default buffer size of 4KB. The total and virtual CPU consumption was equivalent to RSCS 3.1. The communication I/Os (Non-spool I/Os) were within 0.4% of RSCS 3.1.
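The ETR values in Table 50 are consistent with files transferred per elapsed second (the 450 files described in the workload section divided by the elapsed time); a quick check:

```python
# 450 files per measurement; elapsed times in seconds from Table 50.
files = 450
etr_rscs31 = files / 858    # RSCS 3.1
etr_rscs32 = files / 862    # RSCS 3.2
```

Both values round to the 0.524 and 0.522 reported in the table.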
Table 50. RSCS NJE 3088

Release                  RSCS 3.1   RSCS 3.2   Difference  %Difference

TOTAL COST
  ETR                    0.524      0.522      -0.002      -0.38%
  Elapsed Time (sec)     858        862        4           0.47%

RSCS
  Total CPU Sec (I)      29.00      29.00      0.00        0.00%
  Virtual CPU Sec (I)    10.00      10.00      0.00        0.00%
  Non-Spool I/Os (I)     23255      23336      81          0.35%

Note: I=INDICATE USER
SNANJE Line Driver Results

The SNANJE line driver enables RSCS to be part of a SNA network. It runs in conjunction with GCS and VM/VTAM to provide native SNA support. It also uses multi-streaming to transfer multiple files concurrently over the same link. The Synchronous Data Link Control (SDLC) line protocol and the CTCA protocol are supported for SNANJE. Only the CTCA line protocol was tested.
Below is the configuration used by the SNANJE line driver:
SYSTEM A SYSTEM B
          ┌──────┐ ┌──────┐            ┌──────┐ ┌──────┐
          │ RSCS │ │ VTAM ├── 3088 ────┤ VTAM │ │ RSCS │
          │SNANJE│ │      │            │      │ │SNANJE│
 ┌──────┐ ├──────┤ ├──────┤            ├──────┤ ├──────┤ ┌──────┐
 │ CMS  │ │ GCS  │ │ GCS  │            │ GCS  │ │ GCS  │ │ CMS  │
 │ USER │ │      │ │      │            │      │ │      │ │ USER │
 ├──────┴─┴──────┴─┴──────┤            ├──────┴─┴──────┴─┴──────┤
 │           CP           │            │           CP           │
 └────────────────────────┘            └────────────────────────┘
Measurement Discussion: The following table shows the throughput results when using the default buffer size of 1KB. The section of the tables called TOTAL COST is the sum of the RSCS virtual machine and the VTAM virtual machine. This allows the total cost of executing the workload to be compared. In terms of total cost, CPU usage and communication I/Os (Non-spool I/Os) were within 1% of RSCS 3.1.
Table 51. RSCS SNANJE 3088

Release                  RSCS 3.1   RSCS 3.2   Difference  %Difference

TOTAL COST
  ETR                    0.530      0.530      0.000       0.00%
  Elapsed Time (sec)     849        849        0           0.00%
  Total CPU (I)          35.70      36.00      0.30        0.84%
  Virtual CPU (I)        16.67      16.63      -0.04       -0.24%
  Non-Spool I/Os (I)     15146      15246      100         0.66%

RSCS
  Total CPU (I)          28.70      29.00      0.30        1.05%
  Virtual CPU (I)        13.00      13.30      0.30        2.31%
  Non-Spool I/Os (I)     6          8          2           33.33%

VTAM
  Total CPU (I)          7.00       7.00       0.00        0.00%
  Virtual CPU (I)        3.67       3.33       -0.34       -9.26%
  Non-Spool I/Os (I)     15141      15238      97          0.64%

Note: I=INDICATE USER
TCPNJE Line Driver Results

The TCPNJE line driver enables RSCS to be part of a TCP/IP network. The CTCA line protocol was also used to measure the TCP/IP environment.
Below is the configuration used by the TCPNJE line driver:
SYSTEM A SYSTEM B
          ┌──────┐ ┌──────┐            ┌──────┐ ┌──────┐
          │ RSCS │ │TCP/IP├── 3088 ────┤TCP/IP│ │ RSCS │
          │TCPNJE│ │      │            │      │ │TCPNJE│
 ┌──────┐ ├──────┤ ├──────┤            ├──────┤ ├──────┤ ┌──────┐
 │ CMS  │ │ GCS  │ │ CMS  │            │ CMS  │ │ GCS  │ │ CMS  │
 │ USER │ │      │ │      │            │      │ │      │ │ USER │
 ├──────┴─┴──────┴─┴──────┤            ├──────┴─┴──────┴─┴──────┤
 │           CP           │            │           CP           │
 └────────────────────────┘            └────────────────────────┘
Measurement Discussion: The following table shows the throughput results when using the default buffer sizes (1KB for SNANJE and 4KB for TCPNJE). The table compares the SNANJE line driver to the TCPNJE line driver. The section of the table called TOTAL COST is the sum of RSCS and VTAM or TCP/IP. The total CPU consumption was 20.0% higher for the TCP/IP environment. This is mostly due to the TCP/IP virtual machine. The RSCS total CPU is 3.5% higher and the virtual CPU is 2.3% lower.
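The TOTAL COST rows in Table 52 are the sums of the RSCS figure and the VTAM (or TCP/IP) figure; for the total CPU seconds:

```python
# Total CPU seconds (INDICATE USER) from Table 52.
rscs_snanje, vtam = 29.00, 7.00        # SNANJE: RSCS + VTAM
rscs_tcpnje, tcpip = 30.00, 13.25      # TCPNJE: RSCS + TCP/IP

total_snanje = rscs_snanje + vtam      # 36.00
total_tcpnje = rscs_tcpnje + tcpip     # 43.25

# About 20% more total CPU in the TCP/IP environment; the table's
# 20.01% figure is presumably computed from unrounded measurement data.
increase = (total_tcpnje - total_snanje) / total_snanje * 100.0
```

The sums reproduce the 36.00 and 43.25 TOTAL COST entries, and the increase comes out near the reported 20%.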
Table 52. RSCS TCPNJE 3088

Line Driver              SNANJE     TCPNJE
Release                  RSCS 3.2   RSCS 3.2   Difference  %Difference

TOTAL COST
  ETR                    0.530      0.521      -0.009      -1.70%
  Elapsed Time (sec)     849        864        15          1.77%
  Total CPU (I)          36.00      43.25      7.25        20.01%
  Virtual CPU (I)        16.63      19.50      2.87        17.26%
  Non-Spool I/Os (I)     15246      19419      4173        27.37%

RSCS
  Total CPU (I)          29.00      30.00      1.00        3.45%
  Virtual CPU (I)        13.30      13.00      -0.30       -2.26%
  Non-Spool I/Os (I)     8          6          -2          -25.00%

VTAM or TCP/IP
  Total CPU (I)          7.00       13.25      6.25        89.29%
  Virtual CPU (I)        3.33       6.50       3.17        95.20%
  Non-Spool I/Os (I)     15238      19413      4175        27.40%

Note: I=INDICATE USER
DirMaint 1.5
This section documents the results of measurements made comparing DirMaint Release 1.4 to DirMaint Release 1.5. The two releases of DirMaint were run on VM/ESA 1.2.2 with the CMS-intensive workload (FS8F0R) running with the processors at approximately 80% busy. All DirMaint commands were issued from a separate virtual machine.

DirMaint has been rewritten to incorporate many outstanding customer requirements. DirMaint 1.5 has been implemented mostly in REXX, with the execs being provided both in source form and compiled. Customers with the REXX run-time library can use this compiled version. Two sets of measurements have been made to show the performance of both the uncompiled and compiled versions.
Workload: FS8F0R + DIR001
The workload consists of the FS8F0R CMS-intensive workload along with a DirMaint user (DIR001) and DirMaint server. The user machine issued DirMaint commands throughout the measurement period. These commands are:

DIRM AUTH
DIRM DROPF
DIRM HELP
DIRM REV
DIRM PW
DIRM PW
DIRM PW ?
DIRM TERM
DIRM ACCOUNT
DIRM ACCOUNT
DIRM IPL
DIRM MDISK
DIRM MDPW

These commands were selected because they were considered to be the more commonly used DirMaint commands. There was no study done to determine if this mix of user activity is representative of what might typically be executed. The rate of DirMaint command execution was set such that the DirMaint 1.4 server CPU utilization matched that of a local VM/ESA production system (about 0.1%).
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
  Real: 256MB (default MDC)
  Expanded: 0MB
Tape: 3480 (Monitor)
Additional Evaluations 145
DirMaint 1.5
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Measurement Discussion: The following table shows the system effects of migrating from DirMaint Release 1.4 to DirMaint Release 1.5, without the use of the compiled REXX execs. The external response time (AVG LAST (T)) increased by 4.2% and the internal throughput rate (ITR (H)) decreased by 4.1%. These decreases in the key performance indicators can be attributed to a large increase in processor usage for the DirMaint server machine and an increase in processor usage for the DirMaint user machine. The DirMaint server machine has been implemented in REXX, replacing code that had been mostly written in assembler language. High-level, interpretive languages consume more resources than assembler language.

The increase in DASD I/O RATE (V) for the DirMaint server reflects the fact that DirMaint 1.4 keeps its data in virtual storage while DirMaint 1.5 keeps its data in a file on DASD.
Type of   Control   Number      - Number of Volumes -
DASD      Unit      of Paths    (PAGE SPOOL TDSK User Server System)

3390-2    3990-2    4           16 6 6
3390-2    3990-3    2           2 R
3390-2    3990-3    4           2 2 16 R
Control Unit   Number   Lines per Control Unit   Speed
3088-08        1        NA                       4.5MB
Virtual Machine   Number   Type         Machine Size/Mode   SHARE   RESERVED   Other Options
DIRMAINT          1        Server       16MB/XC             100
DIR001            1        User         3MB/XC              100
SMART             1        RTM          16MB/370            3%      400        QUICKDSP ON
VTAMXA            1        VTAM/VSCS    64MB/XC             10000   560        QUICKDSP ON
WRITER            1        CP monitor   2MB/XA              100                QUICKDSP ON
Unnnn             1650     Users        3MB/XC              100
Table 53 (Page 1 of 3). Migration from DirMaint 1.4 on the 9121-480

DirMaint Release   1.4        1.5
VM/ESA Release     1.2.2      1.2.2
Run ID             L27D1650   L27D1654   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1650       1650
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

Response Time
  TRIV INT         0.114      0.115      0.001       0.88%
  NONTRIV INT      0.322      0.342      0.020       6.21%
  TOT INT          0.253      0.266      0.013       5.14%
  TOT INT ADJ      0.221      0.233      0.012       5.53%
  AVG FIRST (T)    0.197      0.204      0.007       3.55%
  AVG LAST (T)     0.263      0.274      0.011       4.18%

Throughput
  AVG THINK (T)    26.17      26.24      0.08        0.29%
  ETR              50.85      51.01      0.16        0.31%
  ETR (T)          58.31      58.27      -0.03       -0.06%
  ETR RATIO        0.872      0.875      0.003       0.37%
  ITR (H)          73.62      70.59      -3.03       -4.11%
  ITR              32.11      30.90      -1.21       -3.77%
  EMUL ITR         46.64      44.24      -2.40       -5.15%
  ITRR (H)         1.000      0.959      -0.041      -4.11%
  ITRR             1.000      0.962      -0.038      -3.77%

Proc. Usage
  PBT/CMD (H)      27.168     28.333     1.165       4.29%
  PBT/CMD          27.098     28.315     1.217       4.49%
  CP/CMD (H)       9.047      9.165      0.118       1.31%
  CP/CMD           8.404      8.580      0.177       2.10%
  EMUL/CMD (H)     18.121     19.168     1.047       5.78%
  EMUL/CMD         18.694     19.735     1.041       5.57%

Processor Util.
  TOTAL (H)          158.41     165.11     6.70        4.23%
  TOTAL              158.00     165.00     7.00        4.43%
  UTIL/PROC (H)      79.21      82.55      3.35        4.23%
  UTIL/PROC          79.00      82.50      3.50        4.43%
  TOTAL EMUL (H)     105.66     111.70     6.04        5.71%
  TOTAL EMUL         109.00     115.00     6.00        5.50%
  MASTER TOTAL (H)   78.49      81.83      3.34        4.25%
  MASTER TOTAL       78.00      82.00      4.00        5.13%
  MASTER EMUL (H)    45.84      48.71      2.88        6.27%
  MASTER EMUL        47.00      50.00      3.00        6.38%
  TVR(H)             1.50       1.48       -0.02       -1.41%
  TVR                1.45       1.43       -0.01       -1.02%

Storage
  NUCLEUS SIZE (V)   2572KB     2572KB     0KB         0.00%
  TRACE TABLE (V)    400KB      400KB      0KB         0.00%
  WKSET (V)          84         84         0           0.00%
  PGBLPGS            56071      56037      -34         -0.06%
  PGBLPGS/USER       34.0       34.0       0.0         -0.06%
  FREEPGS            4788       4817       29          0.61%
  FREE UTIL          0.96       0.96       -0.01       -0.60%
  SHRPGS             1238       1288       50          4.04%
Table 53 (Page 2 of 3). Migration from DirMaint 1.4 on the 9121-480

DirMaint Release   1.4        1.5
VM/ESA Release     1.2.2      1.2.2
Run ID             L27D1650   L27D1654   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1650       1650
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

Paging
  READS/SEC          579        583        4           0.69%
  WRITES/SEC         388        393        5           1.29%
  PAGE/CMD           16.584     16.749     0.164       0.99%
  PAGE IO RATE (V)   156.700    159.400    2.700       1.72%
  PAGE IO/CMD (V)    2.687      2.735      0.048       1.78%
  XSTOR IN/SEC       0          0          0           na
  XSTOR OUT/SEC      0          0          0           na
  XSTOR/CMD          0.000      0.000      0.000       na
  FAST CLR/CMD       8.901      9.061      0.160       1.79%

Queues
  DISPATCH LIST      31.74      32.98      1.25        3.92%
  ELIGIBLE LIST      0.00       0.00       0.00        na

I/O
  VIO RATE             609        613        4           0.66%
  VIO/CMD              10.445     10.519     0.075       0.72%
  RIO RATE (V)         343        349        6           1.75%
  RIO/CMD (V)          5.883      5.989      0.106       1.81%
  NONPAGE RIO/CMD (V)  3.195      3.254      0.059       1.83%
  DASD RESP TIME (V)   19.400     19.400     0.000       0.00%
  MDC REAL SIZE (MB)   42.5       41.9       -0.6        -1.33%
  MDC XSTOR SIZE (MB)  0.0        0.0        0.0         na
  MDC READS (I/Os)     169        170        1           0.59%
  MDC WRITES (I/Os)    8.47       8.63       0.16        1.89%
  MDC AVOID            159        160        1           0.63%
  MDC HIT RATIO        0.93       0.93       0.00        0.00%

PRIVOPs
  PRIVOP/CMD         13.886     13.928     0.042       0.30%
  DIAG/CMD           28.609     29.413     0.804       2.81%
  DIAG 04/CMD        2.726      2.730      0.004       0.16%
  DIAG 08/CMD        0.755      0.770      0.015       1.98%
  DIAG 0C/CMD        1.127      1.134      0.006       0.54%
  DIAG 14/CMD        0.024      0.025      0.000       0.88%
  DIAG 58/CMD        1.250      1.250      -0.001      -0.07%
  DIAG 98/CMD        1.434      1.418      -0.017      -1.16%
  DIAG A4/CMD        3.618      3.674      0.056       1.55%
  DIAG A8/CMD        2.809      2.844      0.035       1.25%
  DIAG 214/CMD       13.713     14.411     0.698       5.09%
  SIE/CMD            55.207     55.978     0.770       1.40%
  SIE INTCPT/CMD     36.437     36.385     -0.051      -0.14%
  FREE TOTL/CMD      50.662     50.538     -0.125      -0.25%

VTAM Machines
  WKSET (V)          503        504        1           0.20%
  TOT CPU/CMD (V)    4.0399     4.0136     -0.0263     -0.65%
  CP CPU/CMD (V)     1.5340     1.5254     -0.0086     -0.56%
  VIRT CPU/CMD (V)   2.5059     2.4883     -0.0176     -0.70%
  DIAG 98/CMD (V)    1.435      1.419      -0.016      -1.12%
The following table shows the system effects of migrating from DirMaint Release 1.4 to DirMaint Release 1.5, with the use of the compiled REXX execs. The external response time (AVG LAST (T)) increased by 4.0% and the internal throughput rate (ITR (H)) decreased by 3.2%. These increases are less than those observed in the previous table. The compiled execs reduced the CPU usage of the DirMaint server machine by 14% and reduced the CPU usage of the DirMaint user machine by 27%.

The percentage improvement relative to the uncompiled version is lower than often seen when comparing compiled REXX to the uncompiled equivalent. One contributing factor is the extensive use of DASD I/O in the DirMaint server. The processing associated with this I/O is the same for both the uncompiled and compiled versions. This serves to decrease the overall percentage improvement.
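The 14% and 27% reductions quoted above can be recomputed from the CPU SECONDS rows of Tables 53 and 54:

```python
# DirMaint 1.5 CPU seconds (VMPRF), uncompiled vs. compiled REXX,
# taken from Tables 53 and 54.
server_uncompiled, server_compiled = 93, 80
user_uncompiled, user_compiled = 30, 22

server_cut = (server_uncompiled - server_compiled) / server_uncompiled * 100.0  # ~14%
user_cut = (user_uncompiled - user_compiled) / user_uncompiled * 100.0          # ~27%
```

Both values agree with the percentages stated in the discussion.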
Table 53 (Page 3 of 3). Migration from DirMaint 1.4 on the 9121-480

DirMaint Release   1.4        1.5
VM/ESA Release     1.2.2      1.2.2
Run ID             L27D1650   L27D1654   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1650       1650
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

DIRMAINT Server
  CPU SECONDS (V)     2          93         91          4500.00%
  CPU UTIL (V)        0.1        2.6        2.5         2500.00%
  RESIDENT PAGES (V)  127        194        67          52.76%
  DASD I/O RATE (V)   1.28       5.37       4.09        319.53%

DIRMAINT User
  CPU SECONDS (V)     3          30         27          900.00%
  CPU UTIL (V)        0.1        0.8        0.7         700.00%
  RESIDENT PAGES (V)  43         52         9           20.93%
  DASD I/O RATE (V)   0.84       0.99       0.15        17.86%

Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Table 54 (Page 1 of 3). Migration from DirMaint 1.4 (with compiled execs) on the 9121-480

DirMaint Release   1.4        1.5
VM/ESA Release     1.2.2      1.2.2
Run ID             L27D1650   L27D1655   Difference  %Difference

Environment
  Real Storage     256MB      256MB
  Exp. Storage     0MB        0MB
  Users            1650       1650
  VTAMs            1          1
  VSCSs            0          0
  Processors       2          2

Response Time
  TRIV INT         0.114      0.115      0.001       0.88%
  NONTRIV INT      0.322      0.344      0.022       6.83%
  TOT INT          0.253      0.267      0.014       5.53%
  TOT INT ADJ      0.221      0.233      0.013       5.76%
  AVG FIRST (T)    0.197      0.203      0.006       3.05%
  AVG LAST (T)     0.263      0.274      0.010       3.99%

Throughput
  AVG THINK (T)    26.17      26.19      0.02        0.06%
  ETR              50.85      51.03      0.18        0.35%
  ETR (T)          58.31      58.39      0.08        0.13%
  ETR RATIO        0.872      0.874      0.002       0.22%
  ITR (H)          73.62      71.24      -2.38       -3.23%
  ITR              32.11      31.14      -0.97       -3.01%
  EMUL ITR         46.64      44.68      -1.95       -4.19%
  ITRR (H)         1.000      0.968      -0.032      -3.23%
  ITRR             1.000      0.970      -0.030      -3.01%

Proc. Usage
  PBT/CMD (H)      27.168     28.074     0.906       3.33%
  PBT/CMD          27.098     28.089     0.991       3.66%
  CP/CMD (H)       9.047      9.112      0.065       0.71%
  CP/CMD           8.404      8.564      0.160       1.90%
  EMUL/CMD (H)     18.121     18.962     0.841       4.64%
  EMUL/CMD         18.694     19.525     0.831       4.45%

Processor Util.
  TOTAL (H)          158.41     163.91     5.50        3.47%
  TOTAL              158.00     164.00     6.00        3.80%
  UTIL/PROC (H)      79.21      81.96      2.75        3.47%
  UTIL/PROC          79.00      82.00      3.00        3.80%
  TOTAL EMUL (H)     105.66     110.71     5.05        4.78%
  TOTAL EMUL         109.00     114.00     5.00        4.59%
  MASTER TOTAL (H)   78.49      81.16      2.67        3.40%
  MASTER TOTAL       78.00      81.00      3.00        3.85%
  MASTER EMUL (H)    45.84      48.24      2.40        5.24%
  MASTER EMUL        47.00      50.00      3.00        6.38%
  TVR(H)             1.50       1.48       -0.02       -1.25%
  TVR                1.45       1.44       -0.01       -0.76%

Storage
  NUCLEUS SIZE (V)   2572KB     2572KB     0KB         0.00%
  TRACE TABLE (V)    400KB      400KB      0KB         0.00%
  WKSET (V)          84         84         0           0.00%
  PGBLPGS            56071      56068      -3          -0.01%
  PGBLPGS/USER       34.0       34.0       0.0         -0.01%
  FREEPGS            4788       4786       -2          -0.04%
  FREE UTIL          0.96       0.96       0.00        0.04%
  SHRPGS             1238       1294       56          4.52%
Table 54 (Page 2 of 3). Migration from DirMaint 1.4 (with compiled execs) on the9121-480
DirMaint ReleaseVM/ESA ReleaseRun ID
1.41.2.2
L27D1650
1.51.2.2
L27D1655Difference %Difference
EnvironmentReal StorageExp. StorageUsersVTAMsVSCSsProcessors
256MB0MB1650
102
256MB0MB1650
102
PagingREADS/SECWRITES/SECPAGE/CMDPAGE IO RATE (V)PAGE IO/CMD (V)XSTOR IN/SECXSTOR OUT/SECXSTOR/CMDFAST CLR/CMD
579388
16.584156.700
2.68700
0.0008.901
582392
16.682157.500
2.69800
0.0009.009
34
0.0980.8000.010
00
0.0000.108
0.52%1.03%0.59%0.51%0.38%
nanana
1.21%
QueuesDISPATCH LISTELIGIBLE LIST
31.740.00
32.050.00
0.320.00
1.00%na
I/OVIO RATEVIO/CMDRIO RATE (V)RIO/CMD (V)NONPAGE RIO/CMD (V)DASD RESP TIME (V)MDC REAL SIZE (MB)MDC XSTOR SIZE (MB)MDC READS (I/Os)MDC WRITES (I/Os)MDC AVOIDMDC HIT RATIO
60910.445
3435.8833.195
19.40042.5
0.0169
8.47159
0.93
61310.499
3475.9433.246
19.60042.6
0.0171
8.62161
0.93
40.054
40.0610.0510.200
0.10.0
20.15
20.00
0.66%0.52%1.17%1.03%1.58%1.03%0.13%
na1.18%1.77%1.26%0.00%
PRIVOPsPRIVOP/CMDDIAG/CMDDIAG 04/CMDDIAG 08/CMDDIAG 0C/CMDDIAG 14/CMDDIAG 58/CMDDIAG 98/CMDDIAG A4/CMDDIAG A8/CMDDIAG 214/CMDSIE/CMDSIE INTCPT/CMDFREE TOTL/CMD
13.88628.609
2.7260.7551.1270.0241.2501.4343.6182.809
13.71355.20736.43750.662
13.88029.282
2.7400.7671.1340.0241.2491.4053.6842.832
14.27355.76736.24850.731
-0.0060.6740.0140.0120.0070.000
-0.001-0.0290.0660.0230.5600.559
-0.1880.069
-0.04%2.35%0.51%1.58%0.59%0.04%
-0.09%-2.06%1.82%0.82%4.08%1.01%
-0.52%0.14%
VTAM Machines: WKSET (V), TOT CPU/CMD (V), CP CPU/CMD (V), VIRT CPU/CMD (V), DIAG 98/CMD (V)
5034.03991.53402.5059
1.435
5054.00591.51292.4930
1.405
2-0.0340-0.0211-0.0129
-0.030
0.40%-0.84%-1.38%-0.51%-2.07%
Additional Evaluations 151
Table 54 (Page 3 of 3). Migration from DirMaint 1.4 (with compiled execs) on the 9121-480
DirMaint Release / VM/ESA Release / Run ID
1.4 / 1.2.2
L27D1650
1.5 / 1.2.2
L27D1655 / Difference / %Difference
Environment: Real Storage, Exp. Storage, Users, VTAMs, VSCSs, Processors
256MB / 0MB / 1650 / 1 / 0 / 2
256MB / 0MB / 1650 / 1 / 0 / 2
DIRMAINT Server: CPU SECONDS (V), CPU UTIL (V), RESIDENT PAGES (V), DASD I/O RATE (V)
20.1
1271.28
802.2
2735.66
782.1
1464.38
3900.00%2100.00%
114.96%342.19%
DIRMAINT User: CPU SECONDS (V), CPU UTIL (V), RESIDENT PAGES (V), DASD I/O RATE (V)
30.143
0.84
220.673
1.21
190.530
0.37
633.33%500.00%
69.77%44.05%
Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
VTAM 4.2.0
This section documents the results of measurements made to observe the effects of migrating from VTAM 3.4.1 to VTAM 4.2.0 on VM/ESA 2.1.0 for a CMS-intensive workload.
Workload: FS8F0R
Hardware Configuration
Processor model: 9121-480
Processors used: 2
Storage:
Real: 256MB (default MDC)
Expanded: 0MB
Tape: 3480 (Monitor)
DASD:
Note: R or W next to the DASD counts means basic cache enabled or DASD fast write (and basic cache) enabled, respectively.
Communications:
Software Configuration
Driver: TPNS
Think time distribution: Bactrian
CMS block size: 4KB
Virtual Machines:
Type of   Control  Number
DASD      Unit     of Paths  Volumes (PAGE, SPOOL, TDSK, User, Server, System)
3390-2    3990-2   4         16, 6, 6
3390-2    3990-3   2         2 R
3390-2    3990-3   4         2, 2, 16 R
Control Unit  Number  Lines per Control Unit  Speed
3088-08       1       NA                      4.5MB
Virtual
Machine  Number  Type        Machine Size/Mode  SHARE  RESERVED  Other Options
SMART    1       RTM         16MB/370           3%     400       QUICKDSP ON
VTAMXA   1       VTAM/VSCS   64MB/XC            10000  560       QUICKDSP ON
WRITER   1       CP monitor  2MB/XA             100              QUICKDSP ON
Unnnn    1900    Users       3MB/XC             100
Measurement Discussion: The following table shows the performance results. External response time (AVG LAST (T)) was equivalent, within measurement variability. Internal throughput (ITR (H)) improved by 0.6%. This improvement was mainly due to a 3.3% drop in CPU usage by the VTAM virtual machine.
Table 55 (Page 1 of 3). Migration from VM/VTAM 3.4.1 to VM/VTAM 4.2.0 using VM/ESA 2.1.0 on the 9121-480
VTAM Release / VM Release / Run ID
3.4.1 / 2.1.0
L28E190N
4.2.0 / 2.1.0
L28E190P / Difference / %Difference
Environment: Real Storage, Exp. Storage, Users, VTAMs, VSCSs, Processors
256MB / 0MB / 1900 / 1 / 0 / 2
256MB / 0MB / 1900 / 1 / 0 / 2
Response Time: TRIV INT, NONTRIV INT, TOT INT, TOT INT ADJ, AVG FIRST (T), AVG LAST (T)
0.1240.3670.2870.2510.2190.303
0.1240.3630.2840.2490.2260.309
0.000-0.004-0.003-0.0020.0070.006
0.00%-1.09%-1.05%-0.71%3.43%1.98%
Throughput: AVG THINK (T), ETR, ETR (T), ETR RATIO, ITR (H), ITR, EMUL ITR, ITRR (H), ITRR
26.1458.5466.940.87479.0334.5751.631.0001.000
26.1458.4066.560.87779.5234.9252.181.0061.010
0.00-0.14-0.380.003
0.490.350.55
0.0060.010
-0.02%-0.24%-0.57%0.33%0.62%1.01%1.06%0.62%1.01%
Proc. Usage: PBT/CMD (H), PBT/CMD, CP/CMD (H), CP/CMD, EMUL/CMD (H), EMUL/CMD
25.30725.246
8.9408.365
16.36616.880
25.14925.090
8.9018.263
16.24916.827
-0.157-0.155-0.039-0.102-0.118-0.053
-0.62%-0.62%-0.44%-1.22%-0.72%-0.32%
Processor Util.: TOTAL (H), TOTAL, UTIL/PROC (H), UTIL/PROC, TOTAL EMUL (H), TOTAL EMUL, MASTER TOTAL (H), MASTER TOTAL, MASTER EMUL (H), MASTER EMUL, TVR (H), TVR
169.41169.00
84.7084.50
109.56113.00
84.1284.0047.9050.00
1.551.50
167.39167.00
83.7083.50
108.15112.00
83.0883.0047.3149.00
1.551.49
-2.01-2.00-1.01-1.00-1.41-1.00-1.04-1.00-0.59-1.000.000.00
-1.19%-1.18%-1.19%-1.18%-1.29%-0.88%-1.24%-1.19%-1.24%-2.00%0.10%
-0.30%
Table 55 (Page 2 of 3). Migration from VM/VTAM 3.4.1 to VM/VTAM 4.2.0 using VM/ESA 2.1.0 on the 9121-480
VTAM Release / VM Release / Run ID
3.4.1 / 2.1.0
L28E190N
4.2.0 / 2.1.0
L28E190P / Difference / %Difference
Environment: Real Storage, Exp. Storage, Users, VTAMs, VSCSs, Processors
256MB / 0MB / 1900 / 1 / 0 / 2
256MB / 0MB / 1900 / 1 / 0 / 2
Storage: NUCLEUS SIZE (V), TRACE TABLE (V), WKSET (V), PGBLPGS, PGBLPGS/USER, FREEPGS, FREE UTIL, SHRPGS
2764KB400KB
8655058
29.055880.96
1363
2764KB400KB
8655102
29.055500.92
1294
0KB0KB
044
0.0-38
-0.04-69
0.00%0.00%0.00%0.08%0.08%
-0.68%-4.11%-5.06%
Paging: READS/SEC, WRITES/SEC, PAGE/CMD, PAGE IO RATE (V), PAGE IO/CMD (V), XSTOR IN/SEC, XSTOR OUT/SEC, XSTOR/CMD, FAST CLR/CMD
667451
16.701187.900
2.80700
0.0008.515
662448
16.677186.200
2.79700
0.0008.489
-5-3
-0.024-1.700-0.009
00
0.000-0.026
-0.75%-0.67%-0.14%-0.90%-0.34%
nanana
-0.31%
Queues: DISPATCH LIST, ELIGIBLE LIST
36.140.00
38.250.00
2.110.00
5.83%na
I/O: VIO RATE, VIO/CMD, RIO RATE (V), RIO/CMD (V), NONPAGE RIO/CMD (V), DASD RESP TIME (V), MDC REAL SIZE (MB), MDC XSTOR SIZE (MB), MDC READS (I/Os), MDC WRITES (I/Os), MDC AVOID, MDC HIT RATIO
69910.442
3885.7962.989
19.50039.8
0.0207
9.49196
0.94
69410.427
3825.7392.942
19.70039.5
0.0206
9.40195
0.94
-5-0.015
-6-0.057-0.0470.200
-0.30.0-1
-0.09-1
0.00
-0.72%-0.14%-1.55%-0.98%-1.59%1.03%
-0.68%na
-0.48%-0.95%-0.51%0.00%
Table 55 (Page 3 of 3). Migration from VM/VTAM 3.4.1 to VM/VTAM 4.2.0 using VM/ESA 2.1.0 on the 9121-480
VTAM Release / VM Release / Run ID
3.4.1 / 2.1.0
L28E190N
4.2.0 / 2.1.0
L28E190P / Difference / %Difference
Environment: Real Storage, Exp. Storage, Users, VTAMs, VSCSs, Processors
256MB / 0MB / 1900 / 1 / 0 / 2
256MB / 0MB / 1900 / 1 / 0 / 2
PRIVOPs: PRIVOP/CMD, DIAG/CMD, DIAG 04/CMD, DIAG 08/CMD, DIAG 0C/CMD, DIAG 14/CMD, DIAG 58/CMD, DIAG 98/CMD, DIAG A4/CMD, DIAG A8/CMD, DIAG 214/CMD, SIE/CMD, SIE INTCPT/CMD, FREE TOTL/CMD
13.91426.344
2.4750.7521.1260.0251.2491.2433.8012.837
11.59953.76335.48350.043
13.92226.325
2.4840.7511.1260.0251.2481.2203.8112.828
11.60253.48635.30149.895
0.008-0.0190.008
-0.0020.0000.0000.000
-0.0230.010
-0.0100.003
-0.277-0.183-0.148
0.06%-0.07%0.34%
-0.21%0.01%
-0.54%-0.02%-1.88%0.26%
-0.34%0.02%
-0.52%-0.52%-0.30%
VTAM Machines: WKSET (V), TOT CPU/CMD (V), CP CPU/CMD (V), VIRT CPU/CMD (V), DIAG 98/CMD (V)
5604.14951.51872.6308
1.243
5694.01481.51082.5040
1.220
9-0.1347-0.0079-0.1268
-0.024
1.61%-3.25%-0.52%-4.82%-1.90%
Note: T=TPNS, V=VMPRF, H=Hardware Monitor, Unmarked=RTM
Appendix A. Workloads
The workloads that were used to evaluate VM/ESA 2.1.0 are described in this appendix.
CMS-Intensive (FS8F)
Workload Description

FS8F simulates a CMS user environment, with variations simulating a minidisk environment, an SFS environment, or some combination of the two. Table 56 shows the search-order characteristics of the two environments used for measurements discussed in this document.
The measurement environments have the following characteristics in common:
• A Bactrian-distribution think time averaging 30 seconds is used. (See "Glossary of Performance Terms" on page 177 for an explanation of Bactrian distribution.)
• The workload is continuous in that scripts, repeated as often as required, are always running during the measurement period.
• Teleprocessing Network Simulator (TPNS) simulates users for the workload. TPNS runs in a separate processor and simulates LU2 terminals. User traffic travels between the processors through 3088 multisystem channel communication units.
Table 56. FS8F workload characteristics

Filemode:         A    B    C    D    E    F    G    S    Y
ACCESS:           R/W  R/W  R/O  R/W  R/O  R/O  R/O  R/O  R/O
FS8F0R:           minidisk for all nine filemodes
FS8FMAXR:         SFS, SFS, SFS (DS), SFS, SFS (DS), SFS (DS), SFS (DS), minidisk, minidisk
Number of Files:  1000, 500, 500, 500, 500, 500, m (S-disk), n (Y-disk)

Note: m and n are the number of files normally found on the S- and Y-disks respectively. (DS) signifies the use of VM Data Spaces.
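The Bactrian (two-humped) think-time distribution mentioned above can be sketched as a two-component mixture. The hump weights and per-hump means below are illustrative assumptions, chosen only so that the overall mean comes out to the 30 seconds cited in the workload description; the actual parameters used by TPNS are not stated here.

```python
import random

def bactrian_think_time(rng=random):
    """Sample one think time (seconds) from a two-humped mixture.

    The 80/20 split and the per-hump means (7.5 s and 120 s) are
    illustrative assumptions; they are chosen so the overall mean is
    0.8 * 7.5 + 0.2 * 120 = 30 seconds.
    """
    if rng.random() < 0.8:
        return rng.expovariate(1 / 7.5)    # short-think hump, mean 7.5 s
    return rng.expovariate(1 / 120.0)      # long-think hump, mean 120 s
```

Any mixture with the same overall mean would match the stated average; the bimodal shape is what distinguishes Bactrian from a single-humped think-time model.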
Copyright IBM Corp. 1995 157
FS8F Variations

Two FS8F workload variants were used for measurements, one for minidisk-based CMS users, and the other for SFS-based CMS users.
FS8F0R Workload: All filemodes are accessed as minidisk; SFS is not used. All of the files on the C-disk have their FSTs saved in a shared segment.
FS8FMAXR Workload: All file modes, except S and Y (which SFS does not support), the HELP minidisk, and T-disks that are created by the workload, are accessed as SFS directories. The CMSFILES shared segment is used. All read-only SFS directories are defined with PUBLIC READ authority and are mapped to VM data spaces. The read/write SFS directory accessed as file mode D is defined with PUBLIC READ and PUBLIC WRITE authority. The read/write SFS directories accessed as file modes A and B are private directories.
FS8F Licensed Programs

The following licensed programs were used in the FS8F measurements described in this document:
• VS COBOL II Compiler and Library V1R4M0
• Document Composition Facility V1R4M0
• VS FORTRAN Compiler/Library/Debug V2R5M0
• IBM High Level Assembler V1R1M0
• OS PL/I V2R3M0 Compiler & Library
• C & PL/I Common Library V1R2M0
• VTAM V3R4M1
• NCP V5R4M0
Measurement Methodology

A calibration is made to determine how many simulated users are required to attain the desired processor utilization for the baseline measurement. That number of users is used for all subsequent measurements on the same processor and for the same environment.
The measurement proceeds as follows:
• All of the users are logged on by TPNS.
• A script is started for each user after a random delay of up to 15 minutes. (The random delay prevents all users from starting at once.)
• A stabilization period (the length depending on the processor used) is allowed to elapse so that start-up anomalies and user synchronization are eliminated.
• At the end of stabilization, measurement tools are started simultaneously to gather data for the measurement interval.
• At the end of the measurement interval, the performance data is reduced and analyzed.
FS8F Script Description

FS8F consists of 3 initialization scripts and 17 workload scripts. The LOGESA script is run at logon to set up the required search order and CMS configuration. Then users run the WAIT script, during which they are inactive and waiting to start the CMSSTRT script. The CMSSTRT script is run to stagger the start of user activity over a 15 minute interval. After the selected interval, each user starts running a general workload script. The scripts are summarized in Table 57.
Table 57. FS8F workload script summary

Script Name  % Used  Script Description
LOGESA       *       Logon and Initialization
WAIT         *       Wait state
CMSSTRT      *       Stagger start of user activity
ASM617F      5       Assemble (HLASM) and Run
ASM627F      5       Assemble and Run
XED117F      5       Edit a VS BASIC Program
XED127F      10      Edit a VS BASIC Program
XED137F      10      Edit a COBOL Program
XED147F      10      Edit a COBOL Program
COB217F      5       COBOL Compile
COB417F      5       Run a COBOL Program
FOR217F      5       VS FORTRAN Compile
FOR417F      5       FORTRAN Run
PRD517F      5       Productivity Aids Session
DCF517F      5       Edit and Script a File
PLI317F      5       PL/I Optimizer Session
PLI717F      5       PL/I Optimizer Session
WND517F      8       Run Windows with IPL CMS
WND517FL     2       Run Windows with LOGON/LOGOFF
HLP517F      5       Use HELP

Note: Scripts with an asterisk (*) in the "% Used" column are run only once each for each user during initialization.
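The "% Used" figures can be read as selection weights for the 17 workload scripts. The sketch below shows one way a simulated user could pick its next script in those proportions; the selection mechanism itself is an assumption for illustration, since the report does not describe how TPNS schedules scripts.

```python
import random

# "% Used" weights for the workload scripts (initialization scripts excluded).
SCRIPT_MIX = {
    "ASM617F": 5, "ASM627F": 5, "XED117F": 5,
    "XED127F": 10, "XED137F": 10, "XED147F": 10,
    "COB217F": 5, "COB417F": 5, "FOR217F": 5, "FOR417F": 5,
    "PRD517F": 5, "DCF517F": 5, "PLI317F": 5, "PLI717F": 5,
    "WND517F": 8, "WND517FL": 2, "HLP517F": 5,
}

def next_script(rng=random):
    """Pick the next workload script in proportion to its % Used weight."""
    names = list(SCRIPT_MIX)
    weights = [SCRIPT_MIX[n] for n in names]
    return rng.choices(names, weights=weights)[0]
```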
The following are descriptions of each script used in the FS8F workload.
LOGESA: Initialization Script
LOGON userid
SET AUTOREAD ON
IF FS8F0R workload
THEN
  Erase extraneous files from A-disk
  Run PROFILE EXEC to access correct search order,
    SET ACNT OFF, SPOOL PRT CL D, and TERM LINEND OFF
ELSE
  Erase extraneous files from A-directory
  Run PROFILE EXEC to set correct search order, SET ACNT OFF,
    SPOOL PRT CL D, and TERM LINEND OFF
END
Clear the screen
SET REMOTE ON
WAIT: Ten-Second Pause

Leave the user inactive in a 10-second wait loop.
CMSSTRT: Random-Length Pause

Delay, for up to 15 minutes, the start for each user to prevent all users from starting scripts at the same time.
ASM617F: Assemble (HLASM) and Run
QUERY reader and printer
SPOOL PRT CLASS D
XEDIT an assembler file and QQUIT
GLOBAL appropriate MACLIBs
LISTFILE the assembler file
Assemble the file using HLASM (NOLIST option)
Erase the text deck
Repeat all the above except for XEDIT
Reset GLOBAL MACLIBs
Load the text file (NOMAP option)
Generate a module (ALL and NOMAP options)
Run the module
Load the text file (NOMAP option)
Run the module 2 more times
Erase extraneous files from A-disk
ASM627F: Assemble (F-Assembler) and Run
QUERY reader and printer
Clear the screen
SPOOL PRT CLASS D
GLOBAL appropriate MACLIBs
LISTFILE assembler file
XEDIT assembler file and QQUIT
Assemble the file (NOLIST option)
Erase the text deck
Reset GLOBAL MACLIBs
Load the TEXT file (NOMAP option)
Generate a module (ALL and NOMAP options)
Run the module
Load the text file (NOMAP option)
Run the module
Load the text file (NOMAP option)
Run the module
Erase extraneous files from A-disk
QUERY DISK, USERS, and TIME
XED117F: Edit a VS BASIC Program
XEDIT the program
Get into input mode
Enter 29 input lines
Quit without saving file (QQUIT)
XED127F: Edit a VS BASIC Program
Do a FILELIST
XEDIT the program
Issue a GET command
Issue a LOCATE command
Change 6 lines on the screen
Issue a TOP and BOTTOM command
Quit without saving file
Quit FILELIST
Repeat all of the above statements, changing 9 lines instead of 6 and
  without issuing the TOP and BOTTOM commands
XED137F: Edit a COBOL Program
Do a FILELIST
XEDIT the program
Issue a mixture of 26 XEDIT file manipulation commands
Quit without saving file
Quit FILELIST
XED147F: Edit a COBOL Program
Do a FILELIST
XEDIT the program
Issue a mixture of 3 XEDIT file manipulation commands
Enter 19 XEDIT input lines
Quit without saving file
Quit FILELIST
COB217F: Compile a COBOL Program
Set ready message short
Clear the screen
LINK and ACCESS a disk
QUERY link and disk
LISTFILE the COBOL program
Invoke the COBOL compiler
Erase the compiler output
RELEASE and DETACH the linked disk
Set ready message long
SET MSG OFF
QUERY SET
SET MSG ON
Set ready message short
LINK and ACCESS a disk
LISTFILE the COBOL program
Run the COBOL compiler
Erase the compiler output
RELEASE and DETACH the linked disk
QUERY TERM and RDYMSG
Set ready message long
SET MSG OFF
QUERY SET
SET MSG ON
PURGE printer
COB417F: Run a COBOL Program
Define temporary disk space for 2 disks using an EXEC
Clear the screen
QUERY DASD and format both temporary disks
Establish 4 FILEDEFs for input and output files
QUERY FILEDEFs
GLOBAL TXTLIB
Load the program
Set PER Instruction
Start the program
Display registers
End PER
Issue the BEGIN command
QUERY search of minidisks
RELEASE the temporary disks
Define one temporary disk as another
DETACH the temporary disks
Reset the GLOBALs and clear the FILEDEFs
FOR217F: Compile 6 VS FORTRAN Programs
NUCXDROP NAMEFIND using an EXEC
Clear the screen
QUERY and PURGE the reader
Compile a FORTRAN program
Issue INDICATE commands
Compile another FORTRAN program
Issue INDICATE commands
Compile another FORTRAN program
Issue INDICATE command
Clear the screen
Compile a FORTRAN program
Issue INDICATE commands
Compile another FORTRAN program
Issue INDICATE commands
Compile another FORTRAN program
Clear the screen
Issue INDICATE command
Erase extraneous files from A-disk
PURGE the printer
FOR417F: Run 2 FORTRAN Programs
SPOOL PRT CLASS D
Clear the screen
GLOBAL appropriate text libraries
Issue 2 FILEDEFs for output
Load and start a program
Rename output file and PURGE printer
Repeat above 5 statements for two other programs, except
  erase the output file for one and do not issue spool printer
List and erase output files
Reset GLOBALs and clear FILEDEFs
PRD517F: Productivity Aids Session
Run an EXEC to set up names file for user
Clear the screen
Issue NAMES command and add operator
Locate a user in names file and quit
Issue the SENDFILE command
Send a file to yourself
Issue the SENDFILE command
Send a file to yourself
Issue the SENDFILE command
Send a file to yourself
Issue RDRLIST command, PEEK and DISCARD a file
Refresh RDRLIST screen, RECEIVE an EXEC on B-disk, and quit
TRANSFER all reader files to punch
PURGE reader and punch
Run a REXX EXEC that generates 175 random numbers
Run a REXX EXEC that reads multiple files of various sizes from
  both the A-disk and C-disk
Erase EXEC off B-disk
Erase extraneous files from A-disk
DCF517F: Edit and SCRIPT a File
XEDIT a SCRIPT file
Input 25 lines
File the results
Invoke SCRIPT processor to the terminal
Erase SCRIPT file from A-disk
PLI317F: Edit and Compile a PL/I Optimizer Program
Do a GLOBAL TXTLIB
Perform a FILELIST
XEDIT the PL/I program
Run 15 XEDIT subcommands
File the results on A-disk with a new name
Quit FILELIST
Enter 2 FILEDEFs for compile
Compile PL/I program using PLIOPT
Erase the PL/I program
Reset the GLOBALs and clear the FILEDEFs
COPY names file and RENAME it
TELL a group of users one pass of script run
ERASE names file
PURGE the printer
PLI717F: Edit, Compile, and Run a PL/I Optimizer Program
Copy and rename the PL/I program and data file from C-disk
XEDIT data file and QQUIT
XEDIT a PL/I file
Issue RIGHT 20, LEFT 20, and SET VERIFY ON
Change two lines
Change filename and file the result
Compile PL/I program using PLIOPT
Set two FILEDEFs and QUERY the settings
Issue GLOBAL for PL/I transient library
Load the PL/I program (NOMAP option)
Start the program
Type 8 lines of one data file
Erase extraneous files from A-disk
Erase extra files on B-disk
Reset the GLOBALs and clear the FILEDEFs
TELL another USERID one pass of script run
PURGE the printer
WND517F: Use Windows
SET FULLSCREEN ON
TELL yourself a message to create window
QUERY DASD and reader
Forward 1 screen
TELL yourself a message to create window
Drop window message
Scroll to top and clear window
Backward 1 screen
Issue a HELP WINDOW and choose Change Window Size
QUERY WINDOW
Quit HELP WINDOWS
Change size of window message
Forward 1 screen
Display window message
TELL yourself a message to create window
Issue forward and backward border commands in window message
Position window message to another location
Drop window message
Scroll to top and clear window
Display window message
Erase MESSAGE LOGFILE
IPL CMS
SET AUTOREAD ON
SET REMOTE ON
166 VM/ESA 2.1.0 Performance Report
CMS-Intensive (FS8F)
WND517FL: Use Windows with LOGON, LOGOFF
SET FULLSCREEN ON
TELL yourself a message to create window
QUERY DASD and reader
Forward 1 screen
TELL yourself a message to create window
Drop window message
Scroll to top and clear window
Backward 1 screen
Issue a help window and choose Change Window Size
QUERY WINDOW
Quit help windows
Change size of window message
Forward 1 screen
Display window message
TELL yourself a message to create window
Issue forward and backward border commands in window message
Position window message to another location
Drop window message
Scroll to top and clear window
Display window message
Erase MESSAGE LOGFILE
LOGOFF user and wait 60 seconds
LOGON user on original GRAF-ID
SET AUTOREAD ON
SET REMOTE ON
HLP517F: Use HELP and Miscellaneous Commands
Issue HELP command
Choose HELP CMS
Issue HELP HELP
Get full description and forward 1 screen
Quit HELP HELP
Choose CMSQUERY menu
Choose QUERY menu
Choose AUTOSAVE command
Go forward and backward 1 screen
Quit all the layers of HELP
RELEASE Z-disk
Compare file on A-disk to C-disk 4 times
Send a file to yourself
Change reader copies to two
Issue RDRLIST command
RECEIVE file on B-disk and quit RDRLIST
Erase extra files on B-disk
Erase extraneous files from A-disk
VSE Guest (PACE)
Workload Description

PACE is a synthetic VSE batch workload consisting of 7 unique jobs representing the commercial environment. This set of jobs is replicated sixteen times, producing the DYNAPACE workload. The first eight copies run in eight static partitions and another eight copies run in four dynamic classes, each configured with a maximum of two partitions.
The seven jobs are as follows:
• YnDL/1
• YnSORT
• YnCOBOL
• YnBILL
• YnSTOCK
• YnPAY
• YnFORT
The programs, data, and work space for the jobs are all maintained by VSAM on separate volumes. DYNAPACE has about a 2:1 read/write ratio.
Relationship to PACEX8

In previous VM/ESA performance reports, PACEX8 was used as the batch workload for VSE guest measurements. DYNAPACE differs from PACEX8 in the following respects:
• It runs 16 copies of the PACE jobstream (instead of 8 copies).
• The additional 8 copies are run in dynamic partitions.
• For those jobs that run in dynamic partitions, it uses VSE virtual disk in storage for the COBOL compiles and the sort work files.
• The number of elliptical calculations in the FORTRAN job is increased from 4 iterations to 19 for increased processor loading.
Measurement Methodology

The VSE system is configured with the full complement of 12 static partitions (BG, and F1 through FB). F4 through FB are the partitions used to run eight copies of PACE. Four dynamic classes, each with two partition assignments, run another eight copies of PACE.
The partitions are configured identically except for the job classes. The jobs and the partition job classes are configured so that the jobs are equally distributed over the partitions and so that, at any one time, the jobs currently running are a mixed representation of the 7 jobs.
When the workload is ready to run, the following preparatory steps are taken:
• CICS/ICCF is active but idle
• VTAM is active but idle
• VSE/EXPLORE is active
• The LST queue is emptied (PDELETE LST,ALL)
• The accounting file is deleted (J DEL)
Once performance data gathering is initiated for the system (hardware instrumentation, CP MONITOR, and RTM), the workload is started by releasing all of the batch jobs into the partitions simultaneously using the POWER command, PRELEASE RDR,*Y.
As the workload nears completion, various partitions will finish the work allotted to them. The finish time for both the first and last partitions is noted. ETR is calculated as the total elapsed time from the moment the jobs are released until the last partition is waiting for work.
At workload completion, the ITR is calculated by dividing the number of batch jobs by average processor busy time. The processor busy time is calculated as elapsed (wall clock) time multiplied by average processor busy percent divided by 100. The ITR value is multiplied by 60 to represent jobs per CPU busy minute.
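The ITR arithmetic just described can be made concrete with a small sketch. The job count, elapsed time, and utilization figures below are hypothetical inputs, not measured values from the report.

```python
# Hypothetical inputs -- a sketch of the DYNAPACE ITR calculation above,
# not measured data.
jobs = 112             # 16 copies x 7 PACE jobs
elapsed_s = 1800.0     # wall-clock seconds for the run (assumed)
avg_busy_pct = 90.0    # average processor busy percent (assumed)

# Processor busy time = elapsed time x (busy percent / 100).
busy_s = elapsed_s * avg_busy_pct / 100.0

# ITR in jobs per CPU-busy minute (the x60 converts from per-second).
itr = jobs / busy_s * 60.0
```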
VSE Guest (VSECICS)
Workload Description

The VSECICS workload consists of seven applications, written in COBOL and assembler, which include order entry, receiving and stock control, inventory tracking, production specification, banking, and hotel reservations. These applications invoke a total of 17 transactions averaging approximately 6 VSAM calls and 2 communication calls per transaction.
Four independent CICS partitions are run to effectively utilize the measured processor. The storage configuration for this workload is 96MB central storage and no expanded storage. Each of the four CICS partitions accesses 8 VSAM KSDS files. Measurements are taken at the 70% and 90% processor utilization points.
CICS is measured by logging on a predefined number of users, each of which starts running commands from 1 of 12 possible scripts. Once the system reaches a steady state condition, the think time is adjusted to provide a transaction rate that will cause the processor to reach the target utilization level.19 CICS is measured as a steady state system, over a period deemed to be a repeatable sample of work.
Software products used by the CICS workload include VSE/ESA 2.1.0, CICS/VSE* 2.3.0, and ACF/VTAM* 4.2.0. POWER and ACF/VTAM run in their own individual address spaces. This allows, among other things, virtual storage constraint relief. Access methods used include the Sequential Access Method (SAM) and the Virtual Storage Access Method (VSAM). CMF data is logged and then processed by the CICSPARS post-processing facility. Internal response time and total transaction counts are gathered from the CICSPARS report. Legent's EXPLORE is used to gather additional system performance data.
The workload executes a combination of COBOL and assembler applications to produce a 40% read and 60% write mixture. Each application uses several transactions that employ differing sets of CICS functions. The following table indicates the number of transactions for each application and the frequency of specific CICS functions within each:
19 Think time was 11 seconds for all measurements in this report.
Table 58. CICS/VSE transaction characteristics
TRANSACTION TYPE | VSAM CALLS | READ | READNEXT | ADD | UPDATE | DELETE | TRANS DATA | TEMP STOR | % MIX
Banking 310
28 2
1 12
88
Hotel Reservations
22
11 1
1 33
Inventory Control 01714
15
1614 3 2
1 368
Order Entry 339
22
1199
2 1
4
1
9
1221
5555
Product Specification
1834
82
1032
109
Stock Control 1893
1
1
8
19
9
1
53
10
Teller System 0 4
Measurement Methodology

38 DASD volumes (including DOSRES and SYSWK1) are required to run this workload. Each CICS (CICS01 - CICS04) has its own set of 8 dedicated volumes for VSAM data files. There should be at least two CHPIDS to each string of data volumes.
At every measurement point, a CICSPARS report is generated for each of the four CICS workload systems. To determine the total transaction count, which is used to calculate the ITR, the TOTAL TASKS SELECTED fields from all CICSPARS reports are added together.
The ITR is calculated as:

    transactions / processor busy seconds
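As a worked sketch of this calculation, the per-report task counts and the busy-second figure below are hypothetical values, not results from the report:

```python
# TOTAL TASKS SELECTED from each of the four CICSPARS reports (hypothetical).
tasks_selected = [5210, 5185, 5240, 5198]

# Processor busy seconds over the measurement interval (hypothetical).
busy_seconds = 540.0

# ITR = total transactions / processor busy seconds.
itr = sum(tasks_selected) / busy_seconds
```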
Appendix B. Configuration Details
Named Saved Segments / Systems

CMS allows the use of saved segments for shared code. Using saved segments can greatly improve performance by reducing end users' working set sizes and thereby decreasing paging. The environments in this report used the following saved segments:
CMS Contains the CMS nucleus and file status tables (FSTs) for the S- and Y-disks.
CMSFILES Contains the SFS server code in the DMSDAC and DMSSAC logical segments.
CMSPIPES Contains CMSPIPES code in the PIPES logical segment.
CMSINST Contains the execs-in-storage segment.
CMSVMLIB Contains the following logical segments:
VM/ESA 1.2.2 ..
• VMLIB contains the CSL code.
• VMMTLIB contains the CMS multitasking code.
• CMSQRYL and CMSQRYH contain the code for some CMS QUERY and SET commands. This code would otherwise be read from the S-disk when these commands are used.
VM/ESA 2.1.0 ..
• VMLIB contains the CSL code.
• DMSRTSEG contains the REXX runtime library.
HELP Contains FSTs for the HELP disk.
GOODSEG Contains FSTs for the C-disk. The C-disk is in the CMS search order used by the minidisk version of the FS8F workload.
FORTRAN This segment space has two members: DSSVFORT for the FORTRAN compiler and FTNLIB20 for the library composite modules.
DSMSEG4B Contains DCF (Document Composition Facility) code.
GCSXA Contains the GCS nucleus.
VTAMXA Contains the VTAM code.
Server Options

SFS DMSPARMS: This section lists the start-up parameter settings used by each of the SFS servers. The start-up parameters determine the operational characteristics of the file pool server. The SFS servers used the following DMSPARMS file:
ADMIN MAINT U3 OPERATOR MARK
NOBACKUP
FULLDUMP
FILEPOOLID fp_name
NOFORMAT
ACCOUNT
CATBUFFERS 415
MSGS
SAVESEGID CMSFILES
USERS nnnn
For all SFS measurements, the SAVESEGID is specified to identify the segment containing the file pool server runnable code. The USERS parameter is used by the SFS server to configure itself with the appropriate number of user agents and buffers. It is recommended that USERS be set to the administrator's best estimate of the maximum number of logged-on virtual machines that will be using the file pool during peak usage. The ratio of logged-on users to active users varies greatly on actual production machines.
For more information on SFS and SFS tuning parameters, see the SFS and CRR Planning, Administration, and Operation manual or the VM/ESA Performance manual.
CRR DMSPARMS: This section lists the start-up parameter settings used by the CRR recovery server. The start-up parameters determine the operational characteristics of the CRR recovery server. The CRR server uses the following DMSPARMS file:
ADMIN MAINT U3 OPERATOR MARK
NOBACKUP
FULLDUMP
FILEPOOLID fp_name
NOFORMAT
ACCOUNT
MSGS
SAVESEGID CMSFILES
CRRLUNAME lu_name
For more information on CRR and CRR tuning parameters, see the SFS and CRR Planning, Administration, and Operation manual or the VM/ESA Performance manual.
Appendix C. Master Table of Contents
This appendix provides a high-level table of contents that covers all of the performance measurement results that are published in the VM/ESA performance reports. This information is provided in two tables. Table 59 covers all performance measurement results except for migration results, which are covered by Table 22 on page 75. Both of these tables refer to the performance reports using the following notation:
10 VM/ESA Release 1.0 Performance Report
11 VM/ESA Release 1.1 Performance Report
20 VM/ESA Release 2.0 Performance Report
21 VM/ESA Release 2.1 Performance Report
22 VM/ESA Release 2.2 Performance Report
210 VM/ESA Version 2 Release 1.0 Performance Report (this document)
See “Referenced Publications” on page 5 for more information on these reports.
Table 59 (Page 1 of 3). Sources of VM performance measurement results

Subject                                            Report(s)
Migration                                          see page 75
New Functions
  Coordinated Resource Recovery                    10
  VM Data Spaces (Use by SFS)                      11
  3990-3 DASD Fast Write Support                   11
  CMS Pipelines                                    11
  Inter-System Facility for Communications (ISFC)  11 22
  ECKD* Support                                    11
  FBA DASD Support                                 20
  CP Configurability                               20
  DIAGNOSE Code X'250'                             20
  Extended CMS File System Interfaces              20
  Virtual Disk in Storage                          21 22
  Load Wait State PSW Improvements                 21
  REXX SAA* Level 2 Architecture                   21
  Minidisk Cache Enhancements                      22
  Share Capping and Proportional Distribution      22
  SPXTAPE Command                                  22
  ISFC Changes                                     22
  POSIX                                            210
  DCE                                              210
  GCS TSLICE Option                                210
Table 59 (Page 2 of 3). Sources of VM performance measurement results

Subject                                            Report(s)
Special Environments
  Capacity of a Single VTAM/VSCS Virtual Machine   10
  APPC/VM                                          10
  APPC/VM VTAM Support (AVS)                       10
  Effect of Virtual Machine Mode (370, XA, XC)     10 11 210
  Minidisk to SFS                                  10 11 20
  Effect of Real/Expanded Storage Size             11
  Effect of Virtual Machine Size                   11
  LPAR Performance                                 20
  RACF* 1.9                                        20
  VSE Guests using Shared DASD                     20
  VMSES/E                                          20 21 22
  VSE/ESA Guest Performance (Mode Variations)      21
  3745 Comparison to CTCA                          21 22
  Processor-Constrained Environment                22
  RAMAC Array Family                               210
  370 Accommodation                                210
  RSCS 3.2                                         210
  DirMaint 1.5                                     210
  VTAM 4.2.0                                       210
Tuning Studies
  Recommended 9221 Tuning                          11
  GCS IPOLL Option                                 11
  Using Expanded Storage for MDC on a 9121         11
  SET RESERVE                                      11
  OfficeVision* MSGFLAGS Setting                   11
  CMS File Cache for SFS                           20
  I/O Assist for Guests                            20
  Adjusting the Minidisk File Cache Size           21
  VM/ESA REXX Performance Guidelines               21
  Minidisk Cache Tuning: Restricting the Arbiter   21 22
  Effect of IABIAS on Response Time                22
  Using MDC with a Storage Constrained VSE Guest   210
Processor Capacity
  3090-600J                                        10
  9121-480                                         11
  9021-720                                         11
  9021-900                                         20
  9121-742                                         21
  9021-941                                         22
  PC Server 500                                    210
Table 59 (Page 3 of 3). Sources of VM performance measurement results

Subject                                            Report(s)
Additional Studies
  Greater than 16M of Real Storage (370 Feature)   10
  CMS Instruction Trace Data                       10
  Measurement Variability                          20 21
  High Level Assembler Evaluation                  21
  CP Monitor Overhead                              21
  Comparison of VM/VTAM 3.4.0 to 3.4.1             22
  FS7F to FS8F Workload Comparison                 22
Glossary of Performance Terms
Many of the performance terms use postscripts to reflect the sources of the data described in this document. In all cases, the terms presented here are taken directly as written in the text to allow them to be found quickly. Often there will be multiple definitions of the same data field, differing only in the postscript. This allows the precise definition of each data field in terms of its origins. The postscripts are:
< n o n e > . No postscript indicates that the data areobtained from the VM/ESA Realtime Monitor.
(C). Denotes data from the VSE console timestampsor from the CICSPARS reports (CICS transactionperformance data).
(H). Denotes data from the internal processorinstrumentation tools.
(I). Denotes data from the CP INDICATE USERcommand.
(Q). Denotes data from the SFS QUERY FILEPOOLSTATUS command.
(QT). Denotes data from the CP QUERY TIMEcommand.
Server. Indicates that the data are for specific virtualmachines, (for example SFS, CRR, or VTAM/VSCS). Ifthere is more than one virtual machine of the sametype, these data fields are for all the virtual machinesof that type.
(S). Identifies OS/2 data from the licensed program,System Performance Monitor 2 (SPM2).
(T). Identifies data from the licensed program,Teleprocessing Network Simulator (TPNS).
(V). Denotes data from the licensed program VMPerformance Reporting Facility.
The formulas used to derive the various statistics are also shown here. If a term in a formula is in italics, such as Total_Transmits, then a description of how its value is derived is provided underneath the formula. If a term is not in italics, such as SFSTIME, then it has an entry in the glossary describing its derivation.
Absolute Share. An ABSOLUTE share allocates to a virtual machine an absolute percentage of all the available system resources.

Agent. The unit of sub-dispatching within a CRR or SFS file pool server.
Agents Held. The average number of agents that are in a Logical Unit of Work (LUW). This is calculated by:

(1/1000) × Σ[f ∈ filepools] Agent_Holding_Time(f) / SFSTIME(f)

Agent_Holding_Time is from the QUERY FILEPOOL STATUS command.
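Many of the SFS terms in this glossary follow the same pattern: a counter from QUERY FILEPOOL STATUS is divided by SFSTIME for each production file pool, and the per-pool ratios are summed. As an illustrative sketch of the Agents Held calculation (the file pools and counter values below are hypothetical, not taken from the report's measurements):

```python
# Agents Held = (1/1000) * sum over file pools of
#   Agent_Holding_Time / SFSTIME.
# Counter values here are hypothetical examples.

def agents_held(filepools):
    """filepools: list of (agent_holding_time_ms, sfstime_sec) pairs,
    one per production file pool."""
    return sum(t / s for t, s in filepools) / 1000.0

# Two hypothetical file pools measured over a 900-second interval:
pools = [(1_800_000, 900.0), (900_000, 900.0)]
print(agents_held(pools))  # (2000 + 1000) / 1000 = 3.0 agents held on average
```

The same shape, with a different counter in the numerator, yields Agents In Call, Checkpoint Utilization, and the other per-file-pool rates defined below.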
Agents In Call. The average number of agents that are currently processing SFS server requests. This is calculated by:

(1/1000) × Σ[f ∈ filepools] Filepool_Request_Service_Time(f) / SFSTIME(f)

Filepool_Request_Service_Time is from the QUERY FILEPOOL STATUS command.
Avg Filepool Request Time (ms). The average time it takes for a request to the SFS file pool server machine to complete. This is calculated by:

Agents In Call / ( Σ[f ∈ filepools] Total_Filepool_Requests(f) / SFSTIME(f) )

Total_Filepool_Requests is from the QUERY FILEPOOL STATUS command.
AVG FIRST (T). The average response time in seconds for the first reply that returns to the screen. For non-fullscreen commands this is the command reflect on the screen. This is calculated by:

(1 / ETR (T)) × Σ[t ∈ TPNS machines] First_Response(t) × Total_Transmits(t) / TPNS_Time(t)

First_Response is the average first response given in the RSPRPT section of the TPNS reports. Total_Transmits is the total TPNS transmits and TPNS_Time is the run interval log time found in the Summary of Elapsed Time and Times Executed section of the TPNS reports.
AVG LAST (T). The average response time in seconds for the last response to the screen. If there is more than one TPNS this is calculated by:

(1 / ETR (T)) × Σ[t ∈ TPNS machines] Last_Response(t) × Total_Transmits(t) / TPNS_Time(t)

Last_Response is the average last response given in the RSPRPT section of the TPNS reports. Total_Transmits is the total TPNS transmits and TPNS_Time is the run interval log time found in the Summary of Elapsed Time and Times Executed section of the TPNS reports.
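When several TPNS driver machines run concurrently, AVG LAST (T) is a transmit-weighted average of the per-machine last-response times, normalized by ETR (T). A small sketch with hypothetical TPNS figures (not measurements from this report):

```python
# AVG LAST (T) = (1/ETR(T)) * sum over TPNS machines of
#   Last_Response * Total_Transmits / TPNS_Time.
# All values below are hypothetical.

def avg_last(machines, etr_t):
    """machines: list of (last_response_sec, total_transmits, tpns_time_sec)."""
    return sum(r * x / t for r, x, t in machines) / etr_t

machines = [(0.30, 9000, 900.0), (0.50, 4500, 900.0)]
etr_t = sum(x / t for _, x, t in machines)   # 10 + 5 = 15 commands/sec
print(round(avg_last(machines, etr_t), 4))   # (3.0 + 2.5) / 15 = 0.3667
```

AVG FIRST (T) and AVG THINK (T) use the identical weighting, substituting First_Response or Think_Time for Last_Response.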
AVG Lock Wait Time (ms). The average time it takes for an SFS lock conflict to be resolved. This is calculated by:

( Σ[f ∈ filepools] Lock_Wait_Time(f) / SFSTIME(f) ) / ( Σ[f ∈ filepools] Total_Lock_Conflicts(f) / SFSTIME(f) )

Lock_Wait_Time and Total_Lock_Conflicts are both from the QUERY FILEPOOL STATUS command.
AVG LUW Time (ms). The average duration of an SFS logical unit of work. This is calculated by:

( Σ[f ∈ filepools] Agent_Holding_Time(f) / SFSTIME(f) ) / ( Σ[f ∈ filepools] Begin_LUWs(f) / SFSTIME(f) )

Agent_Holding_Time and Begin_LUWs are both from the QUERY FILEPOOL STATUS command.
AVG RESP (C). The average response time in seconds for a VSE CICS transaction. This is calculated by:

(1 / ETR (C)) × Σ[t ∈ CICSPARS files] Last_Response(t) × Total_Transmits(t) / CICS_Time(t)

Last_Response is taken from the AVG TASK RESPONSE TIME line and Total_Transmits is from the TOTAL TASKS SELECTED line of the CICSPARS reports. CICS_Time is the run interval time, which is 900 seconds for all measurements.
AVG THINK (T). Average think time in seconds. The average think time determined by TPNS for all users. This is calculated by:

(1 / ETR (T)) × Σ[t ∈ TPNS machines] Think_Time(t) × Total_Transmits(t) / TPNS_Time(t)

Think_Time is the average think time given in the RSPRPT section of the TPNS reports. Total_Transmits is the total TPNS transmits and TPNS_Time is the run interval log time found in the Summary of Elapsed Time and Times Executed section of the TPNS reports.
Bactrian. A two-humped curve used to represent the think times for both active users and users who are logged on but inactive. The distribution includes those long think times that occur when a user is not actively issuing commands. Actual user data were collected and used as input to the creation of the Bactrian distribution.

BFS. Byte File System.
BIO Request Time (ms). Average time required to process a block I/O request in milliseconds. This is calculated by:

( Σ[f ∈ filepools] Total_BIO_Request_Time(f) / SFSTIME(f) ) / ( Σ[f ∈ filepools] Total_BIO_Requests(f) / SFSTIME(f) )

Total_BIO_Request_Time and Total_BIO_Requests are both from the QUERY FILEPOOL STATUS command.
Blocking Factor (Blocks/BIO). The average number of blocks read or written per Block I/O Request. This is calculated by:

( Σ[f ∈ filepools] Total_DASD_Block_Transfers(f) / SFSTIME(f) ) / ( Σ[f ∈ filepools] Total_BIO_Requests(f) / SFSTIME(f) )

Total_DASD_Block_Transfers and Total_BIO_Requests are both from the QUERY FILEPOOL STATUS command.
Chaining Factor (Blocks/IO). The average number of blocks read or written per I/O request. This is calculated by:

( Σ[f ∈ filepools] Total_DASD_Block_Transfers(f) / SFSTIME(f) ) / ( Σ[f ∈ filepools] Total_IO_Requests(f) / SFSTIME(f) )

Total_DASD_Block_Transfers and Total_IO_Requests are both from the QUERY FILEPOOL STATUS command.
Checkpoint. 1) In an SFS file pool server, the periodic processing that records a consistent state of the file pool on DASD. 2) In a CRR recovery server, the process used to maintain the log disks. All active syncpoint information is written to the logs.
Checkpoint Duration. The average time, in seconds, required to process an SFS checkpoint. This is calculated by:

(1/1000) × ( Σ[f ∈ filepools] Checkpoint_Time(f) ) / ( Σ[f ∈ filepools] Checkpoints_Taken(f) )

Checkpoint_Time and Checkpoints_Taken are from the QUERY FILEPOOL STATUS command.
Checkpoint Utilization. The percentage of time an SFS file pool server spends performing checkpoints. This is calculated by:

(1/10) × Σ[f ∈ filepools] Checkpoint_Time(f) / SFSTIME(f)

Checkpoint_Time is from the QUERY FILEPOOL STATUS command.
Checkpoints Taken (delta). The number of checkpoints taken by all file pools on the system. This is calculated by:

Σ[f ∈ filepools] Checkpoints_Taken(f)

Checkpoints_Taken is from the QUERY FILEPOOL STATUS command.
CICSPARS. CICS Performance Analysis Reporting System, a licensed program that provides CICS response time and transaction information.

CMS BLOCKSIZE. The block size, in bytes, of the users' CMS minidisks.

Command. In the context of reporting performance results, any user interaction with the system being measured.
CP/CMD. For the FS7F, FS8F, and VSECICS workloads, this is the average amount of CP processor time used per command in milliseconds. For the PACE workload, this is the average CP processor time per job in seconds. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads:

10 × (TOTAL - TOTAL EMUL) / ETR (T)

For the PACE workload:

PBT/CMD - EMUL/CMD
CP/CMD (H). See CP/CMD. This is the hardware based measure. This is calculated by:

For 9221 processors:

For the FS7F, FS8F, and VSECICS workloads:

(CP_CPU_PCT × TOTAL (H)) / (10 × ETR (T))

For the PACE workload:

(6000 × CP_CPU_PCT × TOTAL (H)) / ETR (H)

CP_CPU_PCT is taken from the Host CPU Busy line in the CPU Busy/MIPs section of the RE0 report.

For all workloads running on 9121 and 9021 processors:

PBT/CMD (H) - EMUL/CMD (H)
CP CPU/CMD (V) Server. CP processor time, in milliseconds, run in the designated server machine per command. This is calculated by:

(1 / (V_Time × ETR (T))) × Σ[s ∈ server class] (TCPU(s) - VCPU(s))

TCPU is Total CPU busy seconds, VCPU is Virtual CPU seconds, and V_Time is the VMPRF time interval obtained from the Resource Utilization by User Class section of the VMPRF report.
CPU PCT BUSY (V). CPU Percent Busy. The percentage of total available processor time used by the designated virtual machine. Total available processor time is the sum of online time for all processors and represents total processor capacity (not processor usage).

This is from the CPU Pct field in the VMPRF USER_RESOURCE_USER report.

CPU SECONDS (V). Total CPU time, in seconds, used by a given virtual machine. This is the Total CPU Seconds column in VMPRF's USER_RESOURCE_UTIL report.

CPU UTIL (V). The percentage of total system CPU time that is consumed by a given virtual machine. This is the CPU Pct column in VMPRF's USER_RESOURCE_UTIL report.
DASD IO/CMD (V). The number of real SSCH or RSCH instructions issued to DASD, per job, used by the VSE guest in a PACE measurement. This is calculated by:

(60 × DASD IO RATE (V)) / ETR (H)

DASD IO RATE (V). The number of real SSCH or RSCH instructions per second that are issued to DASD on behalf of a given virtual machine. This is the DASD Rate While Logged column in VMPRF's USER_RESOURCE_UTIL report.

For PACE measurements, the number of real SSCH or RSCH instructions per second issued to DASD on behalf of the VSE guest. This is calculated by:

DASD IO TOTAL (V) / V_Time

V_Time is taken from the time stamps at the beginning of the VMPRF DASD Activity Ordered by Activity report.
DASD IO TOTAL (V). The number of real SSCH or RSCH instructions issued to DASD used by the VSE guest in a PACE measurement. This is calculated by:

Σ[d ∈ VSE guest DASD] Total(d)

Total is taken from the Count column in the VMPRF DASD Activity Ordered by Activity report for the individual DASD volumes used by the VSE guest.
DASD RESP TIME (V). Average DASD response time in milliseconds. This includes DASD service time plus (except for page and spool volumes) any time the I/O request is queued in the host until the requested device becomes available.

This is taken from the DASD Resp Time field in the VMPRF SYSTEM_SUMMARY_BY_TIME report.
DCE. Distributed Computing Environment. An industry standard for implementing distributed computing.

Deadlocks (delta). The total number of SFS file pool deadlocks that occurred during the measurement interval, summed over all production file pools. A deadlock occurs when two users each request a resource that the other currently owns. This is calculated by:

Σ[f ∈ filepools] Deadlocks(f)

Deadlocks is from the QUERY FILEPOOL STATUS command.
DIAGNOSE. An instruction that is used to request CP services by a virtual machine. This instruction causes a SIE interception and returns control to CP.
DIAG 04/CMD. The number of DIAGNOSE code X'04' instructions used per command. DIAGNOSE code X'04' is the privilege class C and E CP function call to examine real storage. This is a product-sensitive programming interface. This is calculated by:

DIAG_04 / (RTM_Time × ETR (T))

DIAG_04 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 08/CMD. The number of DIAGNOSE code X'08' instructions used per command. DIAGNOSE code X'08' is the CP function call to issue CP commands from an application. This is calculated by:

DIAG_08 / (RTM_Time × ETR (T))

DIAG_08 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 0C/CMD. The number of DIAGNOSE code X'0C' instructions used per command. DIAGNOSE code X'0C' is the CP function call to obtain the time of day, virtual CPU time used by the virtual machine, and total CPU time used by the virtual machine. This is calculated by:

DIAG_0C / (RTM_Time × ETR (T))

DIAG_0C is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 10/CMD. The number of DIAGNOSE code X'10' instructions used per command. DIAGNOSE code X'10' is the CP function call to release pages of virtual storage. This is calculated by:

DIAG_10 / (RTM_Time × ETR (T))

DIAG_10 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 14/CMD. The number of DIAGNOSE code X'14' instructions used per command. DIAGNOSE code X'14' is the CP function call to perform virtual spool I/O. This is calculated by:

DIAG_14 / (RTM_Time × ETR (T))

DIAG_14 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 58/CMD. The number of DIAGNOSE code X'58' instructions used per command. DIAGNOSE code X'58' is the CP function call that enables a virtual machine to communicate with 3270 virtual consoles. This is calculated by:

DIAG_58 / (RTM_Time × ETR (T))

DIAG_58 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 98/CMD. The number of DIAGNOSE code X'98' instructions used per command. DIAGNOSE code X'98' allows a specified virtual machine to lock and unlock virtual pages and to run its own channel program. This is calculated by:

DIAG_98 / (RTM_Time × ETR (T))

DIAG_98 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.
DIAG 98/CMD (V) VTAM Servers. See DIAG 98/CMD for a description of this instruction. This represents the sum of all DIAGNOSE code X'98' instructions per command for all VTAM and VSCS servers. This is calculated by:

(DIAG_98_VTAM + DIAG_98_VSCS) / ETR (T)

DIAG_98_VTAM and DIAG_98_VSCS are taken from the VMPRF Virtual Machine Communication by User Class report for the VTAM and VSCS server classes respectively.
DIAG A4/CMD. The number of DIAGNOSE code X'A4' instructions used per command. DIAGNOSE code X'A4' is the CP function call that supports synchronous I/O to supported DASD. This is calculated by:

DIAG_A4 / (RTM_Time × ETR (T))

DIAG_A4 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG A8/CMD. The number of DIAGNOSE code X'A8' instructions used per command. DIAGNOSE code X'A8' is the CP function call that supports synchronous general I/O to fully supported devices. This is calculated by:

DIAG_A8 / (RTM_Time × ETR (T))

DIAG_A8 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 214/CMD. The number of DIAGNOSE code X'214' instructions used per command. DIAGNOSE code X'214' is used by the Pending Page Release function. This is calculated by:

DIAG_214 / (RTM_Time × ETR (T))

DIAG_214 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.

DIAG 268/CMD. The number of DIAGNOSE code X'268' instructions used per command. DIAGNOSE code X'268' is used by the CMS370AC function. This is calculated by:

DIAG_268 / (RTM_Time × ETR (T))

DIAG_268 is taken from the TOTALCNT column on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval.
DIAG/CMD. The total number of DIAGNOSE instructions used per command or job. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads:

(1 / (ETR (T) × RTM_Time)) × Σ[x ∈ DIAGNOSE codes] TOTALCNT(x)

For the PACE workload:

(60 / (ETR (H) × RTM_Time)) × Σ[x ∈ DIAGNOSE codes] TOTALCNT(x)

TOTALCNT is the count for the individual DIAGNOSE codes taken over the total RTM time interval on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval taken from the RTM PRIVOPS screen.
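Every DIAG xx/CMD term above has the same shape: an RTM TOTALCNT counter divided by (RTM_Time × ETR (T)) to yield instructions per command, and DIAG/CMD simply sums the counters first. A sketch with hypothetical counter values (not figures from the report):

```python
# DIAG/CMD, FS7F/FS8F/VSECICS form:
#   sum of per-code TOTALCNT values / (ETR(T) * RTM_Time).
# The counts below are hypothetical examples.

def diag_per_cmd(totalcnt_by_code, etr_t, rtm_time):
    """totalcnt_by_code: dict mapping DIAGNOSE code -> TOTALCNT count."""
    return sum(totalcnt_by_code.values()) / (etr_t * rtm_time)

counts = {"04": 900, "08": 13_500, "0C": 270_000, "14": 54_000}
print(diag_per_cmd(counts, etr_t=20.0, rtm_time=900.0))  # 338400/18000 = 18.8
```

Restricting the dictionary to a single code gives the corresponding DIAG xx/CMD value.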
DISPATCH LIST. The average over time of the number of virtual machines (including loading virtual machines) in any of the dispatch list queues (Q0, Q1, Q2, and Q3). This is calculated by:

(1 / Num_Entries) × Σ[t ∈ SCLOG entries] (Q0(t) + Q0L(t) + Q1(t) + Q1L(t) + Q2(t) + Q2L(t) + Q3(t) + Q3L(t))

Q0, Q0L, ... are from the Q0CT, Q0L, ... columns in the RTM SCLOG screen. Num_Entries is the total number of entries in the RTM SCLOG screen.
DPA (Dynamic Paging Area). The area of real storage used by CP to hold virtual machine pages, pageable CP modules, and control blocks.

EDF. Enhanced Disk Format. This refers to the CMS minidisk file system.

Elapsed Time (C). The total time, in seconds, required to execute the PACE batch workload.

This is calculated using the timestamps that appear on the console of the VSE/ESA guest virtual machine. The time the first job started is subtracted from the time the last job ended.
ELIGIBLE LIST. The average over time of the number of virtual machines (including loading virtual machines) in any of the eligible list queues (E0, E1, E2, and E3). This is calculated by:

(1 / Num_Entries) × Σ[t ∈ SCLOG entries] (E0(t) + E0L(t) + E1(t) + E1L(t) + E2(t) + E2L(t) + E3(t) + E3L(t))

E0, E0L, ... are from the E0CT, E0L, ... columns in the RTM SCLOG screen. Num_Entries is the total number of entries in the RTM SCLOG screen.
EMUL ITR. Emulation Internal Throughput Rate. The average number of transactions completed per second of emulation time.

This is from the EM_ITR field under TOTALITR of the RTM TRANSACT screen.
EMUL/CMD. For the FS7F, FS8F, and VSECICS workloads, this is the amount of processor time spent in emulation mode per command in milliseconds. For the PACE workload, this is the emulation processor time per job in seconds.

For the FS7F, FS8F, and VSECICS workloads, this is calculated by:

(10 × TOTAL EMUL) / ETR (T)

For the PACE workload, this is calculated by:

(6000 × TOTAL EMUL) / ETR (H)

EMUL/CMD (H). See EMUL/CMD. This is the hardware based measurement.

For the FS7F, FS8F, and VSECICS workloads, this is calculated by:

(10 × TOTAL EMUL (H)) / ETR (T)

For the PACE workload, this is calculated by:

(6000 × TOTAL EMUL (H)) / ETR (H)
ETR. External Throughput Rate. The number of commands completed per second, computed by RTM.

This is found in the NSEC column for ALL_TRANS for the total RTM interval time on the RTM Transaction screen.
ETR (C). See ETR. The external throughput rate for the VSE guest measurements. For the PACE workloads, it is calculated by:

(60 × Jobs) / Elapsed Time (C)

Jobs is the number of jobs run in the workload. The values of Jobs are 28, 42, 56, and 112 for the PACEX4, PACEX6, PACEX8, and DYNAPACE workloads respectively.

For the VSECICS workload, it is calculated by:

(1 / CICS_Time) × Σ[t ∈ CICSPARS files] Total_Transmits(t)

Total_Transmits is from the TOTAL TASKS SELECTED line in the CICSPARS reports. CICS_Time is the run interval time, which is 900 seconds for all measurements.
ETR (T). See ETR. TPNS-based calculation of ETR. It is calculated by:

Σ[t ∈ TPNS machines] Total_Transmits(t) / TPNS_Time(t)

Total_Transmits is found in the Summary of Elapsed Time and Times Executed section of the TPNS report (TOTALS for XMITS by TPNS). TPNS_Time is the last time in the requested (reduction) period minus the first time in the requested (reduction) period. These times follow the Summary of Elapsed Time in the TPNS report.
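ETR (T) is simply total TPNS transmits per second of logged run time, summed across the driver machines. A minimal sketch with hypothetical transmit counts:

```python
# ETR (T): sum over TPNS machines of Total_Transmits / TPNS_Time.
# The transmit counts and interval below are hypothetical.

def etr_t(machines):
    """machines: list of (total_transmits, tpns_time_sec) pairs."""
    return sum(x / t for x, t in machines)

print(etr_t([(13_500, 900.0), (9_000, 900.0)]))  # 15 + 10 = 25.0 commands/sec
```

This value is the denominator in most of the per-command (/CMD) formulas for the CMS-intensive workloads.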
ETR RATIO. This is the ratio of the RTM-based ETR calculation and the TPNS-based ETR calculation. This is calculated by:

ETR / ETR (T)
Expanded Storage. An optional integrated high-speed storage facility, available on certain processors, that allows for the rapid transfer of 4KB blocks between itself and real storage.

Exp. Storage. See Expanded Storage.

External Response Time. The average response time, in seconds, for the last response to the screen. See AVG LAST (T).
FAST CLR/CMD. The number of fast path clears of real storage per command or job. This includes V=R and regular guests. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads:

Fast_Clear_Sec / ETR (T)

For the PACE workload:

(60 × Fast_Clear_Sec) / ETR (H)

Fast_Clear_Sec is taken from the NSEC column for the total RTM time interval for the FAST_CLR entry on the RTM SYSTEM screen.
File Pool. In SFS, a collection of minidisks managed by a server machine.

FP REQ/CMD (Q). Total file pool requests per command. This is calculated by:

(1 / ETR (T)) × Σ[f ∈ filepools] Total_Filepool_Requests(f) / SFSTIME(f)

Total_Filepool_Requests is from the QUERY FILEPOOL STATUS command.
FREE TOTL/CMD. The number of requests for free storage per command or job. This includes V=R and regular guests. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads:

Free_Total_Sec / ETR (T)

For the PACE workload:

(60 × Free_Total_Sec) / ETR (H)

Free_Total_Sec is taken from the NSEC column for the total RTM time interval on the RTM SYSTEM screen.
FREE UTIL. The proportion of the amount of available free storage actually used. This is calculated by:

Free_Size / (FREEPGS × 4096)

Free_Size is found in the FREE column for the total RTM time interval (<-..) on the RTM SYSTEM screen.

FREEPGS. The total number of pages used for FREE storage (CP control blocks).

This is found in the FPGS column for the total RTM time interval (<-..) on the RTM SYSTEM screen.
FST (File Status Table). CMS control block that contains information about a file belonging to a minidisk or SFS directory.

GB. Gigabytes. 1024 megabytes.

GUEST SETTING. This field represents the type of VSE guest virtual machine in a PACE measurement. This field's possible values are V=V, V=F, or V=R.

GUESTWT/CMD. The number of entries into guest enabled wait state per job. This is calculated by:

(60 × GUESTWT/SEC) / ETR (H)

GUESTWT/SEC. The number of entries into guest enabled wait state per second.

This field is taken from the NSEC column for the RTM total count since last reset, for the GUESTWT field in the RTM SYSTEM screen.
Hardware Instrumentation. See Processor Instrumentation.

HT5. One of the CMS-intensive workloads used in the Large Systems Performance Reference (LSPR) to evaluate relative processor performance.

IML MODE. This is the hardware IML mode used in VSE guest measurements. The possible values for this field are 370, ESA, or LPAR.

Instruction Path Length. The number of machine instructions used to run a given command, function, or piece of code.

Internal Response Time. The response time as seen by CP. This does not include line or terminal delays.
IO TIME/CMD (Q). Total elapsed time in seconds spent doing SFS file I/Os per command. This is calculated by:

(1 / (1000 × ETR (T))) × Σ[f ∈ filepools] Total_BIO_Request_Time(f) / SFSTIME(f)

Total_BIO_Request_Time is from the QUERY FILEPOOL STATUS command.
IO/CMD (Q). SFS file I/Os per command. This is calculated by:

(1 / ETR (T)) × Σ[f ∈ filepools] Total_IO_Requests(f) / SFSTIME(f)

Total_IO_Requests is from the QUERY FILEPOOL STATUS command.
ISFC. Inter-System Facility for Communications.

ITR. Internal Throughput Rate. This is the number of units of work accomplished per unit of processor busy time in a nonconstrained environment. For the FS7F, FS8F, and VSECICS workloads this is represented as commands per processor second. For the PACE workload, this is represented as jobs per processor minute. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads, this is found from the TOTALITR for SYS_ITR on the RTM TRANSACT screen.

For the PACE workload:

(100 × ETR (H)) / UTIL/PROC
ITR (H). See ITR. This is the hardware based measure. In this case, ITR is measured in external commands per unit of processor busy time. For the FS7F, FS8F, and VSECICS workloads this is represented as commands per processor second, while for the PACE workload this is represented in jobs per processor minute. This is calculated by:

For the FS7F, FS8F, and VSECICS workloads:

(100 × ETR (T)) / TOTAL (H)

For the PACE workloads:

(6000 × Jobs) / (Elapsed time (H) × UTIL/PROC (H))

Jobs is the number of jobs run in the workload. The values of Jobs are 28, 42, 56, and 112 for the PACEX4, PACEX6, PACEX8, and DYNAPACE workloads respectively.
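For the CMS-intensive workloads, ITR (H) normalizes external throughput by hardware-measured processor busy percent, so at 100% busy it equals ETR (T). A sketch with hypothetical values (not measurements from the report):

```python
# ITR (H), FS7F/FS8F/VSECICS form: 100 * ETR(T) / TOTAL(H),
# where TOTAL(H) is hardware-measured percent processor busy.
# The inputs below are hypothetical.

def itr_h(etr_t, total_h_pct):
    return 100.0 * etr_t / total_h_pct

print(itr_h(etr_t=25.0, total_h_pct=80.0))  # 31.25 commands per processor-second
```

ITRR (H), defined next, is just this value divided by the ITR (H) of the first run in the table being compared.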
ITRR. Internal Throughput Rate Ratio. This is the RTM based ITR normalized to a specific run. This is calculated by:

ITR / ITR1

ITR1 is the ITR of the first run in a given table.

ITRR (H). See ITRR. This is the ITR (H) normalized to a specific run. This is calculated by:

ITR (H) / ITR (H)1

ITR (H)1 is the ITR (H) of the first run in a given table.
Inter-user Communication Vehicle (IUCV). A VM generalized CP interface that helps the transfer of messages either among virtual machines or between CP and a virtual machine.

I/O Req/sec (S). I/O requests per second. This is Access Rate, taken from the SPM/2 DISK report, summed over all the Physical IDs that the S/390 workload is using.
k. Multiple of 1000.

Kb. Kilobits. One kilobit is 1024 bits.

KB. Kilobytes. One kilobyte is 1024 bytes.

LUW Rollbacks (delta). The total number of SFS logical units of work that were backed out during the measurement interval, summed over all production file pools. This is calculated by:

Σ[f ∈ filepools] LUW_Rollbacks(f)

LUW_Rollbacks is from the QUERY FILEPOOL STATUS command.
MASTER EMUL. Total emulation state utilization for the master processor. For uniprocessors this is the same as TOTAL EMUL and is generally not shown.

This is taken from the %EM column for the first processor listed in the LOGICAL CPU STATISTICS section of the RTM CPU screen. The total RTM interval time value is used (<-..).

MASTER EMUL (H). Total emulation state utilization for the master processor. For uniprocessors this is the same as TOTAL EMUL and is generally not shown. This is the hardware based calculation.

This is taken from the %CPU column of the GUES-CPn line of the REPORT file for the master processor number as shown by RTM. In RTM, the first processor listed on the CPU screen is the master processor.

MASTER TOTAL. Total utilization of the master processor. For uniprocessors this is the same as TOTAL and is generally not shown.

This is taken from the %CPU column for the first processor listed in the LOGICAL CPU STATISTICS section of the RTM CPU screen. The total RTM interval time value is used (<-..).

MASTER TOTAL (H). Total utilization of the master processor. For uniprocessors this is the same as TOTAL (H) and is generally not shown. This is the hardware based calculation.

This is taken from the %CPU column of the SYST-CPn line of the REPORT file for the master processor number as shown by RTM. In RTM, the first processor listed on the CPU screen is the master processor.
MB. Megabytes. One megabyte is 1,048,576 bytes.

MDC AVOID. The number of DASD read I/Os per second that were avoided through the use of minidisk caching.

For VM releases prior to VM/ESA 1.2.2, this is taken from the NSEC column for the RTM MDC_IA field for the total RTM time interval on the RTM SYSTEM screen.

For VM/ESA 1.2.2 and higher, this is taken from the NSEC column for the RTM VIO_AVOID field for the total RTM time interval on the RTM MDCACHE screen.
MDC HIT RATIO. Minidisk Cache Hit Ratio. For VM releases prior to VM/ESA 1.2.2, the number of blocks found in the minidisk cache for DASD read operations divided by the total number of blocks read that are eligible for minidisk caching.

This is from the MDHR field for the total RTM time interval (<-..) on the RTM SYSTEM screen.

For VM/ESA 1.2.2 and higher, the number of I/Os avoided by minidisk caching divided by the total number of virtual DASD read requests (except for page, spool, and virtual disk in storage requests).

This is from the MDHR field for the total RTM time interval (<-..) on the RTM MDCACHE screen.
MDC MODS. Minidisk Cache Modifications. The number of times per second blocks were written in the cache, excluding the writes that occurred as a result of minidisk cache misses. This measure only applies to VM releases prior to VM/ESA 1.2.2.

This is taken from the NSEC column for the RTM MDC_MO field for the total RTM time interval on the RTM SYSTEM screen.

MDC READS (blks). Minidisk Cache Reads. The number of times per second blocks were found in the cache as the result of a read operation. This measure only applies to VM releases prior to VM/ESA 1.2.2.

This is taken from the NSEC column for the RTM MDC_HT field for the total RTM time interval on the RTM SYSTEM screen.

MDC READS (I/Os). Minidisk Cache Reads. The total number of virtual read I/Os per second that read data from the minidisk cache. This measure does not apply to VM releases prior to VM/ESA 1.2.2.

This is taken from the NSEC column for the RTM MDC_READS field for the total RTM time interval on the RTM MDCACHE screen.

MDC REAL SIZE (MB). The size, in megabytes, of the minidisk cache in real storage. This measure does not apply to VM releases prior to VM/ESA 1.2.2.

This is the ST_PAGES count on the RTM MDCACHE screen, divided by 256.
MDC WRITES (blks). Minidisk Cache Writes. The number of CMS blocks moved per second from main storage to expanded storage. This measure only applies to VM releases prior to VM/ESA 1.2.2.

This is taken from the NSEC column for the RTM MDC_PW field for the total RTM time interval on the RTM SYSTEM screen.
MDC WRITES (I/Os). Minidisk Cache Writes. The total number of virtual write I/Os per second that write data into the minidisk cache. This measure does not apply to VM releases prior to VM/ESA 1.2.2.

This is taken from the NSEC column for the RTM MDC_WRITS field for the total RTM time interval on the RTM MDCACHE screen.

MDC XSTOR SIZE (MB). The size, in megabytes, of the minidisk cache in expanded storage.

For VM releases prior to VM/ESA 1.2.2, this is MDNE for the total RTM time interval (<-..) on the RTM SYSTEM screen, divided by 256.

For VM/ESA 1.2.2 and higher, this is the XST_PAGES count on the RTM MDCACHE screen, divided by 256.
Millisecond. One one-thousandth of a second.

Minidisk Caching. Refers to a CP facility that uses a portion of storage as a read cache of DASD blocks. It is used to help eliminate I/O bottlenecks and improve system response time by reducing the number of DASD read I/Os. Prior to VM/ESA 1.2.2, the minidisk cache could only reside in expanded storage and only applied to 4KB-formatted CMS minidisks accessed via diagnose or *BLOCKIO interfaces. Minidisk caching was redesigned in VM/ESA 1.2.2 to remove these restrictions. With VM/ESA 1.2.2, the minidisk cache can reside in real and/or expanded storage and the minidisk can be in any format. In addition to the diagnose and *BLOCKIO interfaces, minidisk caching now also applies to DASD accesses that are done using SSCH, SIO, or SIOF.

Minidisk File Cache. A buffer used by CMS when a file is read or written to sequentially. When a file is read sequentially, CMS reads ahead as many blocks as will fit into the cache. When a file is written sequentially, completed blocks are accumulated until the cache is filled and then are written out together.

MPG. Multiple preferred guests is a facility on a processor that has the Processor Resource/Systems Manager* (PR/SM*) feature installed. This facility supports up to 6 preferred virtual machines. One can be V=R; the others are V=F.

ms. Millisecond.

Native. Refers to the case where an operating system is run directly on the hardware as opposed to being run as a guest on VM.

Non-shared Storage. The portion of a virtual machine's storage that is unique to that virtual machine (as opposed to shared storage such as a saved segment that is shared among virtual machines). This is usually represented in pages.
NONPAGE RIO/CMD (V). The number of real SSCH and RSCH instructions issued per command for purposes other than paging. This is calculated by:

RIO/CMD (V) - PAGE IO/CMD (V)

NONTRIV INT. Non-trivial internal response time in seconds. The average response time for transactions that completed with more than one drop from Q1 or one or more drops from Q0, Q2, or Q3 per second.

This is from TOTALTTM for the RTM NTRIV field on the RTM TRANSACT screen.

Non-Spool I/Os (I). Non-spool I/Os done by a given virtual machine. This is calculated from INDICATE USER data obtained before and after the activity being measured. The value shown is final IO - initial IO.

NPDS. No Page Data-Set. A VSE/ESA option, when running on VM/ESA as a V=V guest, that eliminates paging by VSE/ESA for improved efficiency. All paging is done by VM/ESA.

NUCLEUS SIZE (V). The resident CP nucleus size in kilobytes.

This is from the <K bytes> column on the Total Resident Nucleus line in the VMPRF System Configuration Report.
OSA. IBM S/390 Open Systems Adapter. An integrated S/390 hardware feature that provides an S/390 system with direct access to Token Ring, Ethernet, and FDDI local area networks.
PAGE/CMD. The number of pages moved between real storage and DASD per command or job. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
(READS/SEC + WRITES/SEC) / ETR (T)
For the PACE workload:
60 × (READS/SEC + WRITES/SEC) / ETR (H)
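As an illustration, the PAGE/CMD calculation can be sketched in Python. The function name and sample values below are hypothetical and not part of the report's tooling; READS/SEC and WRITES/SEC would come from the RTM SYSTEM screen, and ETR from the measurement summary.

```python
def page_per_cmd(reads_sec, writes_sec, etr, pace=False):
    """PAGE/CMD: pages moved between real storage and DASD per command
    (FS7F/FS8F/VSECICS) or per job (PACE).  ETR (H) for PACE is a
    per-minute rate, hence the factor of 60."""
    scale = 60 if pace else 1
    return scale * (reads_sec + writes_sec) / etr

# 120 reads/sec + 80 writes/sec at ETR (T) = 25 commands/sec
assert page_per_cmd(120, 80, 25) == 8.0
# Same paging rate at ETR (H) = 30 jobs/minute
assert page_per_cmd(120, 80, 30, pace=True) == 400.0
```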
PAGE IO RATE (V). The number of real SSCH or RSCH instructions issued per second on behalf of system paging.
Glossary of Performance Terms 185
This is the sum of all the entries in the SSCH+RSCH column for Page devices listed in the VMPRF DASD System Areas by Type report.
PAGE IO/CMD (V). The number of real SSCH and RSCH instructions issued per command on behalf of system paging. This is calculated by:
PAGE IO RATE (V) / ETR (T)
Path length. See Instruction Path Length.
PBT/CMD. For the FS7F, FS8F, and VSECICS workloads, this is the number of milliseconds of processor activity per command. For the PACE workload, this is the number of seconds of processor activity per job. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
10 × TOTAL / ETR (T)
For the PACE workload:
6000 × TOTAL / ETR (H)
PBT/CMD (H). See PBT/CMD. This is the hardware-based measure.
For the FS7F, FS8F, and VSECICS workloads:
10 × TOTAL (H) / ETR (T)
For the PACE workload:
6000 × TOTAL (H) / ETR (H)
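A minimal sketch of the two PBT/CMD formulas as stated, using hypothetical helper names and sample values (TOTAL is the total processor utilization in percent, summed over all processors):

```python
def pbt_per_cmd(total, etr_t):
    """PBT/CMD for FS7F/FS8F/VSECICS: milliseconds of processor
    activity per command, per the stated 10 * TOTAL / ETR (T) formula."""
    return 10 * total / etr_t

def pbt_per_job(total, etr_h):
    """PBT/CMD for PACE: processor activity per job, per the stated
    6000 * TOTAL / ETR (H) formula."""
    return 6000 * total / etr_h

# TOTAL = 90 (percent) at ETR (T) = 30 commands/sec -> 30 ms per command
assert pbt_per_cmd(90, 30) == 30.0
assert pbt_per_job(90, 60) == 9000.0
```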
PC Utilization (S). PC processor utilization. This is Processor % Util from the CPU section of the SPM2 report.
PD4. One of the CMS-intensive workloads used in the Large Systems Performance Reference (LSPR) to evaluate relative processor performance.
PGBLPGS. The number of system pageable pages available.
This is from the PPAG field for the total RTM time interval (<-) on the RTM SYSTEM screen.
PGBLPGS/USER. The number of system pageable pages available per user. This is calculated by:
PGBLPGS / USERS
POSIX. A set of IEEE standards that define a standard set of programming and command interfaces based on those provided by the various UNIX implementations.
Privileged Operation. Any instruction that must be run in supervisor state.
PRIVOP/CMD. The number of virtual machine privileged instructions simulated per command or job. This does not include DIAGNOSE instructions. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
(1 / (ETR (T) × RTM_Time)) × Σ over x in privops of TOTALCNT_x
For the PACE workload:
(60 / (ETR (H) × RTM_Time)) × Σ over x in privops of TOTALCNT_x
TOTALCNT is the count for the individual privop taken over the total RTM time interval on the RTM PRIVOPS screen. RTM_Time is the total RTM time interval taken from the RTM PRIVOPS screen. Note: PRIVOPs are recorded differently in 370 and XA modes.
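The CMS-workload form of the PRIVOP/CMD calculation can be sketched as follows; the function name and the privop counts are hypothetical, standing in for TOTALCNT values read off the RTM PRIVOPS screen:

```python
def privop_per_cmd(totalcnt, etr_t, rtm_time):
    """PRIVOP/CMD for the CMS workloads: sum the per-privop counts,
    divide by the RTM interval to get a per-second rate, then divide
    by the command rate ETR (T)."""
    return sum(totalcnt.values()) / (etr_t * rtm_time)

counts = {"LPSW": 1200, "SSM": 300, "STNSM": 500}  # illustrative values
# 2000 privops over a 100-second interval at 4 commands/sec
assert privop_per_cmd(counts, etr_t=4, rtm_time=100) == 5.0
```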
PRIVOPS (Privileged Operations). See Privileged Operation.
Processor Instrumentation. An IBM* internal tool used to obtain hardware-related data such as processor utilizations.
Processor Utilization. The percent of time that a processor is not idle.
Processors. The data field denoting the number of processors that were active during a measurement.
This is from the NC field under CPU statistics on the RTM CPU screen.
PSU. Product Service Upgrade.
Production File Pool. An SFS file pool in which users are enrolled with space. All SFS read/write activity is to production file pools.
QUICKDSP ON. When a virtual machine is assigned this option, it bypasses the normal scheduler algorithm and is placed on the dispatch list immediately when it has work to do. It does not spend time in the eligible lists. QUICKDSP can be specified either via a CP command or in the CP directory entry.
RAID. Redundant array of independent DASD.
RAMAC. A family of IBM storage products based on RAID technology. These include the RAMAC Array Subsystem and the RAMAC Array DASD.
186 VM/ESA 2.1.0 Performance Report
READS/SEC. The number of pages read per second done for system paging.
This is taken from the NSEC column for the PAGREAD field for the total RTM time interval on the RTM SYSTEM screen.
Real Storage. The amount of real storage used for a particular measurement.
Relative Share. A relative share allocates to a virtual machine a portion of the total system resources minus those resources allocated to virtual machines with an ABSOLUTE share. A virtual machine with a RELATIVE share receives access to system resources that is proportional with respect to other virtual machines with RELATIVE shares.
RESERVE. See SET RESERVED.
RESIDENT PAGES (V). The average number of nonshared pages of central storage that are held by a given virtual machine. This is the Resid Storage Pages column in VMPRF's USER_RESOURCE_UTIL report.
RIO/CMD (V). The number of real SSCH and RSCH instructions issued per command. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
RIO RATE (V) / ETR (T)
For the PACE workload:
60 × RIO RATE (V) / ETR (H)
RIO RATE (V). The number of real SSCH and RSCH instructions issued per second.
This is taken from the I/O Rate column for the overall average on the VMPRF System Performance Summary by Time report; the value reported does not include assisted I/Os.
Rollback Requests (delta). The total number of SFS rollback requests made during a measurement. This is calculated by:
Σ over f in filepools of Rollback_Requests_f
Rollback_Requests is from the QUERY FILEPOOL STATUS command.
Rollbacks Due to Deadlock (delta). The total number of LUW rollbacks due to deadlock that occurred during the measurement interval over all production file pools. A rollback occurs whenever a deadlock condition cannot be resolved by the SFS server. This is calculated by:
Σ over f in filepools of Rollbacks_Due_to_Deadlock_f
Rollbacks_Due_to_Deadlock is from the QUERY FILEPOOL STATUS command.
RPC. Remote Procedure Call. A client request to a service provider located anywhere in the network.
RSU. Recommended Service Upgrade.
RTM. Realtime Monitor. A licensed program realtime monitor and diagnostic tool for performance monitoring, analysis, and problem solving.
RTM/ESA. See RTM.
Run ID. An internal use only name used to identify a performance measurement.
SAC Calls / FP Request. The average number of calls within the SFS server to its Storage Access Component (SAC) per file pool request. In environments where there are multiple file pools, this average is taken over all file pool servers. This is calculated by:
(Σ over f in filepools of Sac_Calls_f / SFSTIME_f) / (Σ over f in filepools of Total_Filepool_Requests_f / SFSTIME_f)
Sac_Calls and Total_Filepool_Requests are from the QUERY FILEPOOL STATUS command.
Seconds Between Checkpoints. The average number of seconds between SFS file pool checkpoints in the average file pool. This is calculated by:
1 / (Σ over f in filepools of Checkpoints_Taken_f / SFSTIME_f)
Checkpoints_Taken is from the QUERY FILEPOOL STATUS command.
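The per-file-pool rate aggregations used by the SFS metrics can be sketched as follows. The pool names and counter values are hypothetical stand-ins for QUERY FILEPOOL STATUS deltas, each paired with its SFSTIME interval in seconds:

```python
pools = {
    "POOL1": {"sac_calls": 5120, "requests": 2560, "checkpoints": 4, "sfstime": 512},
    "POOL2": {"sac_calls": 2048, "requests": 1024, "checkpoints": 1, "sfstime": 512},
}

def sac_calls_per_request(pools):
    """SAC Calls / FP Request: summed per-second SAC call rate divided
    by the summed per-second file pool request rate."""
    sac = sum(p["sac_calls"] / p["sfstime"] for p in pools.values())
    req = sum(p["requests"] / p["sfstime"] for p in pools.values())
    return sac / req

def seconds_between_checkpoints(pools):
    """Reciprocal of the summed per-second checkpoint rate."""
    return 1 / sum(p["checkpoints"] / p["sfstime"] for p in pools.values())

assert sac_calls_per_request(pools) == 2.0
assert seconds_between_checkpoints(pools) == 102.4
```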
SET RESERVED (Option). This is a CP command that can be used to allow a V=V virtual machine to have a specified minimum number of pages resident in real storage. It is used to reduce paging and improve performance for a given virtual machine.
SFSTIME. The elapsed time in seconds between QUERY FILEPOOL STATUS invocations for a given file pool done at the beginning and end of a measurement.
SFS TIME/CMD (Q). Total elapsed time per command, in seconds, required to process SFS server requests. This is calculated by:
(1 / ETR (T)) × Σ over f in filepools of Filepool_Request_Service_Time_f / SFSTIME_f
Filepool_Request_Service_Time is from the QUERY FILEPOOL STATUS command.
SHARE. The virtual machine's SHARE setting. The SET SHARE command and the SHARE directory statement allow control of the percentage of system resources a virtual machine receives. These resources include processors, real storage, and paging I/O capability. A virtual machine receives its proportion of these resources according to its SHARE setting. See Relative and Absolute Share.
Shared Storage. The portion of a virtual machine's storage that is shared among other virtual machines (such as saved segments). This is usually represented in pages.
SHRPGS. The number of shared frames currently resident.
SIE. ESA Architecture instruction to Start Interpretive Execution. This instruction is used to run a virtual machine in emulation mode.
SIE INTCPT/CMD. The number of exits from SIE which are SIE interceptions, per command or job. SIE is exited either by interception or interruption. An intercept is caused by any condition that requires CP interaction, such as I/O or an instruction that has to be simulated by CP. This is calculated by:
(Percent_Intercept × SIE/CMD) / 100
Percent_Intercept is taken from the %SC field for the average of all processors for the total RTM time interval (<-..) on the RTM CPU screen.
SIE/CMD. SIE instructions used per command or job. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
SIE_SEC / ETR (T)
For the PACE workload:
60 × SIE_SEC / ETR (H)
SIE_SEC is taken from the XSI field for the total for all processors for the total RTM time interval (<-..) on the RTM CPU screen.
SPM2. System Performance Monitor 2. An IBM licensed program that collects and reports performance data for an OS/2 system.
STARS. System Trace Analysis Reports. Provides various reports based on the analysis of instruction trace data.
S/390 Real Storage. On an IBM PC Server 500 system, the amount of real storage that is available to the System/390 processor.
T/V Ratio. See TVR.
TOT CPU/CMD (V) Server. The total amount of processor time, in milliseconds, for the server virtual machine(s). This is calculated by:
(1 / (V_Time × ETR (T))) × Σ over s in server class of Total_CPU_Secs_s
Total_CPU_Secs and V_Time are from the Resource Utilization by User Class section of the VMPRF reports.
TOT INT. Total internal response time in seconds. Internal response time averaged over all trivial and non-trivial transactions.
This is the value for TOTALTTM for ALL_TRANS on the RTM TRANSACT screen.
TOT INT ADJ. Total internal response time (TOT INT) reported by RTM, adjusted to reflect what the response time would have been had CP seen the actual command rate (as recorded by TPNS). This is a more accurate measure of internal response time than TOT INT. In addition, TOT INT ADJ can be directly compared to external response time (AVG LAST (T)) as they are both based on the same, TPNS-based measure of command rate. This is calculated by:
TOT INT × ETR RATIO
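A one-line sketch of the TOT INT ADJ adjustment, with hypothetical sample values:

```python
def tot_int_adj(tot_int, etr_ratio):
    """TOT INT ADJ: RTM's internal response time scaled by ETR RATIO so
    that it reflects the TPNS-measured command rate, making it directly
    comparable to external response time."""
    return tot_int * etr_ratio

# If RTM reports 0.30 s but CP saw only 80% of the true command rate:
assert round(tot_int_adj(0.30, 0.8), 10) == 0.24
```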
TOTAL. The total processor utilization for a given measurement, summed over all processors.
This comes from the %CPU column for all processors for the total RTM interval time (<-..) on the RTM CPU screen.
TOTAL (H). See TOTAL. This is the hardware-based measurement.
For 9221 processors, this is taken from the Total CPU Busy line in the CPU Busy/Mips section of the RE0 report.
For 9121 and 9021 processors, this is calculated by:
UTIL/PROC (H) × PROCESSORS
Total CPU (I). Total CPU time, in seconds, used by a given virtual machine. This is calculated from INDICATE USER data obtained before and after the activity being measured. The value shown is final TTIME − initial TTIME.
Total CPU (QT). Total CPU time, in seconds, used by a given virtual machine. This is calculated from QUERY TIME data obtained before and after the activity being measured. The value shown is final TOTCPU − initial TOTCPU.
TOTAL EMUL. The total emulation state time for all users across all online processors. This indicates the percentage of time the processors are in emulation state.
This comes from the %EM column for all processors for the total RTM interval time (<-..) on the RTM CPU screen.
TOTAL EMUL (H). The total emulation state time for all users across all online processors. This indicates the percentage of time the processors are in emulation state. This is calculated by:
For 9221 processors, this comes from the SIE CPU Busy / Total CPU Busy (PCT) line in the RE0 report.
For 9121 and 9021 processors, this comes from the %CPU column for the GUES-ALL line of the REPORT file times the number of processors.
Total Time (QT). Elapsed time, in seconds. This is calculated from QUERY TIME data obtained before and after the activity being measured. The value shown is the final CONNECT timestamp − the initial CONNECT timestamp, converted to seconds.
TPNS. Teleprocessing Network Simulator. A licensed program terminal and network simulation tool that provides system performance and response time information.
Transaction. A user/system interaction as counted by CP. For a single-user virtual machine, a transaction should roughly correspond to a command. It does not include network or transmission delays and may include false transactions. False transactions can be those that wait for an external event, causing them to be counted as multiple transactions, or those that process more than one command without dropping from queue, causing multiple transactions to be counted as one.
TRACE TABLE (V). The size in kilobytes of the CP trace table.
This is the value of the <K bytes> column on the Trace Table line in the VMPRF System Configuration Report.
Transaction (T). This is the interval from the time the command is issued until the last receive prior to the next send. This includes clear screens as a result of an intervening MORE... or HOLDING condition.
TRIV INT. Trivial internal response time in seconds. The average response time for transactions that complete with one and only one drop from Q1 and no drops from Q0, Q2, and Q3.
This is from TOTALTTM for the TRIV field on the RTM TRANSACT screen.
TVR. Total to Virtual Ratio. This is the ratio of total processor utilization to virtual processor utilization. This is calculated by:
TOTAL / TOTAL EMUL
TVR (H). See TVR. Total to Virtual Ratio measured by the hardware monitor. This is calculated by:
TOTAL (H) / TOTAL EMUL (H)
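The TVR calculation is a simple ratio; a hypothetical sketch with sample utilizations:

```python
def tvr(total, total_emul):
    """TVR: total processor utilization divided by emulation-state
    (virtual) utilization; values near 1.0 indicate low CP overhead."""
    return total / total_emul

# 90% total busy with 75% of the time in emulation state
assert tvr(90, 75) == 1.2
```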
Users. The number of virtual machines logged on to the system during a measurement interval that are associated with simulated end users. This includes active and inactive virtual machines but does not include service machines.
UTIL/PROC. Per processor utilization. This is calculated by:
TOTAL / PROCESSORS
UTIL/PROC (H). Per processor utilization reported by the hardware.
For 9221 processors, this is calculated by:
TOTAL (H) / PROCESSORS
For 9121 and 9021 processors:
This is taken from the %CPU column in the SYST-ALL line of the REPORT file.
VIO RATE. The total number of all virtual I/O requests per second for all users in the system.
This is from the ISEC field for the total RTM time interval (<-) on the RTM SYSTEM screen.
VIO/CMD. The average number of virtual I/O requests per command or job for all users in the system. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
VIO RATE / ETR (T)
For the PACE workload:
60 × VIO RATE / ETR (H)
Virtual CPU (I). Virtual CPU time, in seconds, used by a given virtual machine. This is calculated from INDICATE USER data obtained before and after the activity being measured. The value shown is final VTIME − initial VTIME.
Virtual CPU (QT). Virtual CPU time, in seconds, used by a given virtual machine. This is calculated from QUERY TIME data obtained before and after the activity being measured. The value shown is final VIRTCPU − initial VIRTCPU.
VIRT CPU/CMD (V) Server. Virtual processor time, in milliseconds, run in the designated server machine(s) per command. This is calculated by:
(1 / (V_Time × ETR (T))) × Σ over s in server class of Virt_CPU_Secs_s
Virt_CPU_Secs and V_Time are from the Resource Utilization by User Class section of the VMPRF reports.
VM Mode. This field is the virtual machine setting (370, XA, or ESA) of the VSE guest virtual machine in PACE and VSECICS measurements.
VM Size. This field is the virtual machine storage size of the VSE guest virtual machine in PACE and VSECICS measurements.
VMPAF. Virtual Machine Performance Analysis Facility. A tool used for performance analysis of VM systems.
VMPRF. VM Performance Reporting Facility. A licensed program that produces performance reports and history files from VM/XA or VM/ESA monitor data.
VSCSs. The number of virtual machines running VSCS external to VTAM during a measurement interval.
VSE Supervisor. This field is the VSE supervisor mode used in a PACE or VSECICS measurement.
VTAMs. The number of virtual machines running VTAM during a measurement interval.
V=F. Virtual equals fixed machine. A virtual machine that has a fixed, contiguous area of real storage. Unlike V=R, storage does not begin at page 0. For guests running V=F, CP does not page this area. Requires the PR/SM hardware feature to be installed.
V=R. Virtual equals real machine. A virtual machine that has a fixed, contiguous area of real storage starting at page 0. CP does not page this area.
V=V. Virtual equals virtual machine. Default storage processing. CP pages the storage of a V=V machine in and out of real storage.
WKSET (V). The average working set size. This is the scheduler's estimate of the amount of storage the average user will require, in pages.
This is the average of the values for WSS in the VMPRF Resource Utilization by User report (found in the Sum/Avg line).
WKSET (V) Server. Total working set of a related group of server virtual machine(s). This is calculated by:
Σ over s in server virtual machines of Avg_WSS_s
Avg_WSS is found in the Avg WSS column in the VMPRF Resource Utilization by User Class report for each class of server.
WRITES/SEC. The number of page writes per second done for system paging.
This is taken from the NSEC column for the PAWRIT field for the total RTM time interval on the RTM SYSTEM screen.
XSTOR IN/SEC. The number of pages per second read into main storage from expanded storage. This includes fastpath and non-fastpath pages. It is calculated by:
Fastpath_In + NonFastpath_In
Fastpath_In and NonFastpath_In are taken from the NSEC column for the XST_PGIF and XST_PGIS fields for the total RTM time interval on the RTM SYSTEM screen.
XSTOR OUT/SEC. The number of pages per second written from main storage into expanded storage.
This is taken from the NSEC column for the XST_PGO field for the total RTM time interval on the RTM SYSTEM screen.
XSTOR/CMD. The number of pages read into main storage from expanded storage and written to expanded storage from main storage, per command or job. This is calculated by:
For the FS7F, FS8F, and VSECICS workloads:
(XSTOR IN/SEC + XSTOR OUT/SEC) / ETR (T)
For the PACE workload:
60 × (XSTOR IN/SEC + XSTOR OUT/SEC) / ETR (H)
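The XSTOR/CMD calculation follows the same per-command/per-job pattern as PAGE/CMD; a hypothetical sketch with sample rates:

```python
def xstor_per_cmd(xstor_in_sec, xstor_out_sec, etr, pace=False):
    """XSTOR/CMD: expanded-storage page moves (in + out) per command,
    or per job for PACE, where ETR (H) is a per-minute rate."""
    scale = 60 if pace else 1
    return scale * (xstor_in_sec + xstor_out_sec) / etr

# 150 pages/sec in + 50 pages/sec out at ETR (T) = 40 commands/sec
assert xstor_per_cmd(150, 50, 40) == 5.0
assert xstor_per_cmd(150, 50, 24, pace=True) == 500.0
```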