
Page 1: A Triathlon Event


DB2 UDB for z/OS V8 Migration:

A Triathlon Event

Joan Keemle / John Deere

DB2 UDB V8 is a momentous release. The migration process itself has significantly changed from a single process to a three-phase procedure with fallback limitations and implications. Don’t be intimidated. This presentation will review the V8 Migration experience at John Deere using a Triathlon analogy, to help you prepare for your big event.

This is an Intermediate session for “seasoned DB2 athletes”. We will not discuss every detailed step of migration, but will focus on what is different or unusual. Details are included in the Notes sections for reference.

This presentation topic was delivered at IDUG in 2005 and 2006. Guess what?! At time of publication, we’re still in the process of migrating! Come and find out WHY. We’ve learned even more, and I’ll share our experiences regarding problems and compatibility issues.

Disclaimer: This information is based on the DB2 V8 Migration experience at John Deere, and is specific to our environment. Your requirements and results may vary, depending on your environment.

Page 2: A Triathlon Event


Agenda

• Introduction

– Business value

– DB2 “Classic” Environment

• Preparation

– Phased Approach

– The V7 Environment

– Focusing on V8

• Scheduling

• Communication

Page 3: A Triathlon Event


Agenda

• Migration Pre-Work

• Migration: Compatibility Mode

– Problems

– Fallback

– Remigration

• Migration: Enabling New Function Mode

• Migration: New Function Mode

• The “Fourth” Phase: Committed to NFM

• Summary

Page 4: A Triathlon Event


Business Value for John Deere

• Point-in-Time Recovery

• Java performance improvements

• Accounting Roll-up

• HR SAP Upgrade

• Single-Version License Charge – QMF, too!

• Supports Consumption Reduction Efforts

• QPP / ESP - September 2003

Business Value for John Deere

These are the items that compelled us to participate in the QPP program, and to pursue migration early on.

PIT Recovery: In December 1999, eight companies running SAP on DB2 z/OS met with IBM and SAP in Walldorf, Germany. The discussions centered on issues that global companies were facing running SAP on the z/OS platform. One of the main issues this group identified was the need for a more robust backup/recovery solution at the DB2 system-level instead of the DB2 object level.

Java Performance Improvements: multi-row select and insert, among others.

Accounting Roll-Up: Our usage-based chargeback system has experienced difficulties with dramatically increasing accounting records, due in part to Java usage. We helped design and plan to leverage the Accounting roll-up capabilities to maintain and manage our chargeback system.

HR SAP Upgrade requires DB2 V8.

Single-Version License Charging. With our SAP DB2 systems going to V8, we had to migrate our DB2 “Classic” or non-SAP systems also, or pay for support of 2 versions. Who wants to pay for support of 2 versions?! Don’t forget, QMF is a feature of DB2. This means not only should all DB2 V7 systems be migrated to V8, but any QMF V7 systems also need to be migrated to QMF V8 to avoid a dual license charge for DB2.

Supports Consumption Reduction Efforts: V8 has many improvements in SQL, utilities, and performance that support cost and consumption reduction efforts. For example, optimizer enhancements for Stage 1 predicate evaluation instead of Stage 2.

John Deere joined the V8 QPP / ESP late in 2003.

Page 5: A Triathlon Event


DB2 “Classic” Environment

at John Deere

• Non-SAP

• Non-Data Sharing

• 21 Systems

• 8 LPARs

• 4 Physical Footprints

• BMC Tools for Change Management

• Omegamon for Performance

We refer to the legacy or non-SAP DB2 environment as the “Classic” environment at John Deere. We are not using Data Sharing at present, although our DB2 SAP environment is. We support 21 systems across 8 logical partitions on 4 physical footprints. We use BMC tools for change management and Omegamon for performance.

Page 6: A Triathlon Event


DB2 “Classic” Environment

at John Deere

• Cross-system Connectivity

• TCP/IP

• DRDA

• Private Protocol

• SMS

• DB2 Connect

• Information Integrator

• Universal Driver

– Type 2 for WAS on z/OS

– Type 4 for remote clients

There is a lot of cross-system connectivity among our applications. We use TCP/IP and DRDA, but we have a lot of older applications still using Private Protocol. Since Private Protocol support is deprecated, we have plans to pursue conversion to DRDA after V8 migration. Based on our migration experience, we are accelerating Private Protocol to DRDA conversion plans (more to come on this topic).

We use SMS and storage groups for DB2 user data storage.

Some applications use the DB2 Connect gateway for connectivity, while others leverage Information Integrator (V8 “Datajoiner”) for heterogeneous database access.

We implemented the Universal driver (JCC driver) in early 2005.

Page 7: A Triathlon Event


DB2 “Classic” Environment

at John Deere

• IMS V8, V9

• CICS TS 2.3

• z/OS 1.7

• COBOL 2, Enterprise COBOL

• Java

• WAS on z/OS, Linux

• Stored procedure usage

• DB2 UDB for LUW

• Oracle, SQL Server

• QMF 8.1 Compatibility Mode

Applications are a mix of traditional IMS, CICS, COBOL 2 and Enterprise COBOL, and a lot of Java. We have Websphere Application Server running on z/OS as well as Linux. There is a lot of stored procedure usage. The majority of stored procedures are COBOL, but we are seeing growth of SQL procedures and Java. We use DB2 UDB, and we also have some Oracle and SQL Server. We’ve migrated to QMF 8.1 Compatibility Mode.

When upgrading to CICS TS 2.3, we discovered CICS SMF records include DB2 time. Our chargeback routines required adjustment for this.

When we started the project, we were running z/OS 1.5. We’ve since upgraded to z/OS 1.6 and then z/OS 1.7. After our z/OS 1.6 upgrade, we started getting 00F30013 (unable to connect to DB2) messages. Starting with z/OS 1.6, DSNR class is active by default. With the DSNR class active, RACF profiles are needed in the DSNR Class to control access. Refer to z/OS 1.6 Migration (from 1.5), Section 18.1.1, Add RACF profiles in the DSNR class to protect all DB2 subsystems: http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/E0Z2M12B/18.1.1?SHELF=E0Z2BK51&DT=20050322113016
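As an illustration of the DSNR fix described above, profiles are defined per subsystem and connection environment (BATCH, DIST, RRSAF), and connectors need READ access; this is a hedged sketch, where the subsystem name DSN1 and the RACF group DB2USERS are placeholders for your own values:

```
SETROPTS CLASSACT(DSNR)
RDEFINE DSNR (DSN1.BATCH DSN1.DIST DSN1.RRSAF) UACC(NONE)
PERMIT DSN1.BATCH CLASS(DSNR) ID(DB2USERS) ACCESS(READ)
PERMIT DSN1.DIST  CLASS(DSNR) ID(DB2USERS) ACCESS(READ)
SETROPTS RACLIST(DSNR) REFRESH
```

Without READ access to the matching ssid.environment profile, connection attempts fail with reason code 00F30013, as we experienced.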

With z/OS 1.7, the OS/390 C Compiler member and library is no longer delivered. Any reference to the old compiler and library names needs to be changed:
CHANGE CBCDRVR -> CCNDRVR
CHANGE DDNAME SCBCCMP -> SCCNCMP

Compatibility of QMF and DB2 is documented on the QMF support site: http://www.ibm.com/support/docview.wss?rs=89&context=SS9UMF&dc=DB520&q1=db2+qmf+v8&q2=z%2fos&uid=swg21201944&loc=en_US&cs=utf-8&lang=en

Page 8: A Triathlon Event


Phased Approach

• Limit risk

• Toleration before Exploitation

• Compatibility Mode: Swim

• Transition 1: Test, Test, Test

• Enabling New Function Mode: Bike

• Transition 2: Stability

• New Function Mode: Run

We’ve experienced problems critical enough to warrant fallback in past migrations. We want to limit risk, and we need the ability to fall back if necessary. We took a phased and cautious approach to V8 Migration. Our approach is one of “toleration before exploitation”.

First, we’ll get our feet wet by migrating to Compatibility Mode. We’ve encouraged the majority of user application testing to take place after migration to Compatibility Mode.

When comfortable that we’ve successfully completed at least a full monthly business cycle and addressed all issues or problems in the test environment with V8 CM, we’ll begin to migrate production to V8 CM.

After we complete production migrations to V8 CM and stability continues, we’ll begin migrating test to ENFM, then production to ENFM. As the last production systems complete ENFM, we begin taking test, then production, systems to NFM.

When all systems are NFM, we will change our system default to NEWFUN=YES.

Page 9: A Triathlon Event


Preparation: The V7 Environment

• Catalog Reorgs

• System Packs

– Eliminate extents

– 20% freespace on packs

• Buffer Pools

– BP0 is Catalog

– BP8K0, BP16K0, BP32K allocated

• TEMP Database

– 8KB and larger Tablespaces

Preparation is key. We wanted our environment to be in the best shape it can be going into the big event.

We recently reorganized all catalogs.

We adjusted all of our system packs where catalog and directory datasets reside. We adjusted placement of datasets such that extents were eliminated and left a sizeable amount of 20% freespace on each of the packs.

We have 8 allocated buffer pools. BP0 contains the Catalog and Directory. The new required buffer pools BP8K0, BP16K0, and BP32K are allocated. In V8, several catalog tablespaces will use the new bufferpools:

•Several tablespaces use BP8K0

•SYSSTATS uses BP16K0

•SYSALTER uses BP32K.

We are not using Hiperpools or Dataspaces.

The TEMP database has already been defined to support static scrollable cursors. In V8, DB2 requires a tablespace of 8KB page size or greater in the TEMP database to support Declared Temporary Tables.
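A sketch of the TEMP database requirement described above, using hypothetical database, tablespace, and storage group names (TEMP tablespaces must be segmented; the 8KB page size comes from the BP8K0 buffer pool):

```
-- The TEMP database itself (we had already defined this in V7
-- for static scrollable cursors):
CREATE DATABASE DSNTEMP AS TEMP;

-- V8 needs a tablespace of 8KB page size or greater in the TEMP
-- database to support Declared Temporary Tables:
CREATE TABLESPACE TMP8K01 IN DSNTEMP
  USING STOGROUP SYSDEFLT
  BUFFERPOOL BP8K0
  SEGSIZE 16;
```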

Page 10: A Triathlon Event


Preparation: The V7 Environment

• DB2 “Classic” CCSID Check:

– EBCDIC CCSID ===> 37

– ASCII CCSID ===> 819

– UNICODE CCSID ===> 1208

• DB2 SAP CCSID Check: Conversion Required!

• II13695 V7-V8 Fallback

• Fallback/Toleration Maintenance

• BSDS datasets expanded

• Check with Vendors – software release levels

• Stable V7

Our CCSIDs are set up properly for our environment in DB2 “Classic”. Our DB2 SAP team had to convert. They referred to SAP OS Notes 660501 and 679694 and contacted IBM for assistance.

These links provide some information about Unicode:

ftp://ftp.software.ibm.com/software/db2storedprocedure/db2zos390/techdocs/unicodep1.pdf

ftp://ftp.software.ibm.com/software/db2storedprocedure/db2zos390/techdocs/unicodep2.pdf

Info APAR II13695 is the DB2 V7.1 MIGRATION/FALLBACK INFO APAR TO/FROM DB2 V8.1 AND UPGRADING R710. We review this on a regular basis, looking for any changes or updates. It lists the Unicode conversion services required for V8.

We included the fallback/toleration APAR early in our V7 maintenance.

We increased the underlying VSAM dataset size of all bootstrap datasets. We plan to expand in V8, to eventually leverage the increased number of archive log datasets. (The number of active log datasets can be increased, also.)
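The parenthetical above about adding active log datasets can be sketched with the change log inventory utility, DSNJU003, run while DB2 is stopped; dataset names here are placeholders for your own naming convention:

```
//CHGLOG   EXEC PGM=DSNJU003
//SYSUT1   DD DSN=DB2A.BSDS01,DISP=SHR
//SYSUT2   DD DSN=DB2A.BSDS02,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  NEWLOG DSNAME=DB2A.LOGCOPY1.DS04,COPY1
  NEWLOG DSNAME=DB2A.LOGCOPY2.DS04,COPY2
/*
```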

We checked with all vendors regarding toleration and exploitation levels of software and tools for usage with DB2 V8. We had to upgrade some tools to a new release for toleration of V8, so scheduling and coordination took this into account.

Our V7 environment was very stable before we started V8 migrations. This is very important. We did not want to be troubleshooting V7 problems, and potentially introducing new issues, with V8 migration!

Page 11: A Triathlon Event


Preparation: Focusing on V8

• Review new & changed ZPARMs

• Build checklist in Spreadsheet

• Build skeletal JCL for all jobs

• Build skeletal procedures

• Schedule / Apply Vendor Tool Maintenance

• Implement V8 ERLY Code & IPL

• Develop Test Plan

We reviewed all new and changed ZPARMs for V8. We made a special note of those we want to tailor instead of default. We also reviewed and updated compiler libraries used by DSNH. We recommend comparing the new V8 ZPARM member against the V7 ZPARM member to rule out any ‘surprises’.

We use a spreadsheet for all migrations and maintenance. Each step is a separate row, and each DB2 subsystem is a separate column across the spreadsheet.

We build “skeletal” JCL and procedures (DBM1 and IRLM) for all jobs to be executed against each system. We have a CLIST we execute in batch that copies and tailors each of these for the specific system as we prepare for migration. All JCL tailoring is done ahead of time.

We started vendor tool upgrades required for V8 toleration.

DB2 V8 ERLY code requires an IPL to take effect. We scheduled and installed ERLY code on all LPARs ahead of migration.

Our test plan is ever-changing. We expand it with each new release of DB2. Details of our test plan are on following slides and notes.

Page 12: A Triathlon Event


Test Plan

• Detailed test scenario in SYSPROG environment

– IBM IVP

– other things we leverage / had difficulty with at John Deere

– V8 CM, ENFM: V7 IVP (Beware the Samples!)

– V8 NFM: V8 IVP

• Test scenario executed at every mode of migration and fallback:

– Pre-Migration

– Compatibility Mode

– Fallback to V7

– ENFM (after SYSDBASE conversion)

– NFM

– Fallback to ENFM

– Return to NFM

IVP - Part of the IVP includes the samples: DSNTEP2 and DSNTIAUL. In V8 IVP, these have new function. If you attempt to execute the V8 IVP while in V8 CM or ENFM, you will see SQLCODE -4700 ATTEMPT TO USE NEW FUNCTION BEFORE NEW FUNCTION MODE. Be sure to use the V7 IVP while in V8 CM and ENFM. Once you are in V8 NFM, use the V8 IVP and the new version of these modules.

Other things we tested include:

•Image Copy, Delete, Recovery and Rebuild Index Scenario

•DSN1COPY with OBID translation

•QMF: interactive and batch; Import, Export, Save, Draw

•SAS connectivity to DB2 and Information Integrator

•Connectivity to/from DB2 UDB

•Stored Procedure execution from Java client

•Stored Procedure Builder

•Visual Explain V8

•ODBC Connectivity

•Declared Global Temp Tables

•V7 → V8 Connectivity Compatibility

•DRDA

•Private Protocol

•Automated thread cancellation policy

•Resource Limit Facility (reactive)

•BIND with COPY option

•Deere Usermods: Image copy process, Local Date routine, Sign-on and Authorization exits, CLISTs, other batch programs

•User Defined Functions

•Scalar Functions

•LOAD Utility with large tape input, negative values input

Page 13: A Triathlon Event


Test Plan

• Execute with each new maintenance round

• Expand based on problems encountered

• Other teams test their products and tools

We rely on other teams to test products and tools they use and support:

Performance tools

DBA tools

Query tools

Oracle Gateway Connectivity

Developer Tools

Enterprise COBOL with co-processor and pre-compiler

RMDS

Charge-back Process

Page 14: A Triathlon Event


Scheduling

• Timing

– FY-End, CY-End, Month-End, Spring Planting, Fall Harvest, Holidays, Factory Shutdowns, infrastructure changes, major implementations

– Window

• Physical Footprints

– CPU Utilization

• Crash & Burn, Test, Production

– Always have “like” environment

– Some systems need to closely follow others

– Some systems should migrate before others

• Team Relay / “Wave” start

Timing is a big factor. There are many events to plan ‘around’. There are times we avoid introducing change due to increased processing requirements. We try to avoid Holidays and shutdowns due to less staff on hand. Spring planting and Fall harvest are to John Deere as Christmas is to Retail. These are high-activity periods where we have little or no tolerance for instability.

We also coordinate with other infrastructure changes: hardware upgrades, operating system upgrades, major application implementations.

We have a 2-to-8 hour window for change implementation that varies by system and time of year.

Due to the presumption of increased CPU utilization with V8, we wanted to avoid “overwhelming” any single physical footprint with more than one V8 implementation at a time. Our plan incorporated this.

It is vital to always have a “like” environment, whatever version and mode you are supporting. During migration, there can be as many as 4 environment types:

1 – V7
2 – V8 CM or V7 Fallback
3 – V8 ENFM
4 – V8 NFM

We found it advantageous to migrate most systems to V8 CM before going forward with ENFM/NFM. This greatly reduces the complexity of environments to support, and reduces the risk of compatibility issues.

Page 15: A Triathlon Event


Communication

• Infrastructure – change planning meetings

• Business partners – weekly meetings

• Technical Staff – list servers

• Education – Transition course planning

• Schedule change tasks in Problem/Change

Management System for visibility

• Plan for Problem Reporting

• Informational Web Page

Communication is our friend! It’s important to get the word out early and often about the status of V8, and to address user concerns and issues.

We meet regularly to discuss and coordinate changes among infrastructure components. We meet weekly via phone with key business partners to discuss the status of migration and address any problems, issues, and concerns. DBA staff is primarily decentralized and dedicated to various business units.

We issue technical messages to list-servers, informing subscribers of the migration schedule and issues.

We have a regimented process for change planning and problem management. All changes are scheduled in this system well ahead of time for visibility across the enterprise.

We have a plan in place for communication of problems. All calls come into one number, where a problem ticket is opened and documented. This reduces the chance of more than one of us working the same issue, and it helps to ensure all problems are formally documented.

We established an Informational Web page.

Page 16: A Triathlon Event


Informational Web Page

• Current Status, as of Date

• Common Problems

• FAQ

• DRDA Conversion

• DBD Conversion

• DB2 V8 and COBOL

• DB2 V8 Documentation Links

• Migration Modes

Our Informational Web Page has proven to be very useful as a communication tool. We include a high-level current status, with as-of Date. We provide a list of common problems, and a link to a spreadsheet we are using to track known problems and status. We also include a section for Frequently Asked Questions, DRDA Conversion, DBD Conversion, and DB2 V8 and COBOL. We’ll go into more detail on most of this in coming slides.

We include documentation links for the IBM DB2 V8 library of manuals, V8 Redbooks, and links for desktop DB2 V8 Posters.

We explain each of the Migration modes in detail.

Page 17: A Triathlon Event


Informational Web Page

• Implementation Schedule

• Compatibility with V7, Other V8 Modes

• Incompatibility and Deprecation List

• Test Scenario

• Plan Table Changes

• DB2 V8 Limits

• New V8 Reserved Words

• Visual Explain

We include a link to our Implementation schedule. We explain compatibility across DB2 versions. We provide links to documentation containing the Incompatibility and Deprecation List. This includes discussion of COBOL, Stored Procedures, Private Protocol, and DSNTIAR / GET DIAGNOSTICS.

We include a link to our Test Scenario document, so anyone can see what has already been tested by our team. We include information about Plan table changes and new DB2 V8 Limits. We provide a list of the new reserved words with V8, and a list of those DB2 objects that contain these words as column names. We provide information about usage and a link to SQL Reference.

We include a link for Visual Explain: http://www.ibm.com/software/data/db2/zos/osc/ve/

You need at least IBM CAE 8.1 FP6 installed on your machine before executing Visual Explain V8; it will not run on older clients.

Visual Explain Version 8 did not work with our standard configuration. On our clients, we catalog to the node, and the node is cataloged to the server (i.e., the service name), which then picks up the port number from the services file. When we upgraded to the Type 4 driver, we found we had to catalog to the port number in order for Visual Explain to work. Visual Explain is a Java application. Version 7 of Visual Explain used the Type 2 driver (thus the standard DB2 Connect gateway). Version 8 of VE uses the Type 4 (Universal) driver, thus it will only run with the port number, not the service name.

VE FP6 allows for Service Names to be used instead of requiring port numbers. The fix was redesigned from the original plans so now it will work with the type 4 driver instead of calling the type 2 driver if service name is used.

Page 18: A Triathlon Event


Frequently Asked Questions (FAQ)

• What can I do to prepare for Migration?

• Will my COBOL 2 Load Modules continue to execute?

• What if I need to change my COBOL 2 program?

• Do I need to rebind everything?

• While migration is in process, will I be able to stage to production?

What can I do to prepare for Migration? Bind critical plans and packages EXPLAIN(YES), increase SORTNUM on all utilities to the 20-25 range, and execute a query to identify packages that are bound remotely but not locally. (Refer to the discussion of PKGLDTOL on page 31. The query is included there.)
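The “bind critical plans and packages EXPLAIN(YES)” step above can be done in batch through the DSN command processor; the subsystem, plan, and collection names below are illustrative only:

```
//REBIND   EXEC PGM=IKJEFT01
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  DSN SYSTEM(DSN1)
  REBIND PLAN(PAYROLL) EXPLAIN(YES)
  REBIND PACKAGE(PAYCOLL.*) EXPLAIN(YES)
  END
/*
```

This captures pre-migration access paths in the PLAN_TABLE, so they can be compared after migration.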

Will my COBOL 2 Load modules continue to execute? Yes. All existing load modules will continue to execute without any changes. COBOL 2 modules should be linked to run under Language Environment.

What if I need to change my COBOL 2 program? If changes are needed to fix a problem or enhance application function, it should be converted to Enterprise COBOL. If this is a problem or a support issue in the middle of the night, you will be able to use the DB2 V7 pre-compiler library to get your COBOL 2 compiled as an emergency measure. (This was not endorsed by IBM; we did this at our own risk. This library and our COBOL 2 compiler have since been eliminated.)

Do I need to rebind everything? No, you should not have to rebind packages or plans. However, in some cases, we have seen errors that are resolved by a simple rebind. If you are executing packages, rebind the package as well as the plan. If you experience problems, we suggest you try this as a first measure. It is generally recommended to rebind at some point after a new release to leverage optimizer enhancements in static SQL. Rebind in CM will pick up the majority of optimizer enhancements and convert the plan or package to the new expanded V8 format. If you are only going to rebind once, do it in NFM. DB2 will automatically rebind plans that were bound prior to Version 2.3.

While migration is in process, will I be able to stage to production? Yes. Regardless of what mode we are in, you will be able to stage from a DB2 V8 system (any mode) to a DB2 V7 or a DB2 V8 system (any mode), provided you are not using new function and have not specified NEWFUN(YES) on your pre-compile.

Page 19: A Triathlon Event


Frequently Asked Questions (FAQ)

• What do I need to test?

• How was the schedule arrived at?

• How can I use new function, once we are in NFM and before we change the system default?

• Do I need to change my Explain PLAN_TABLE?

What do I need to test? We executed the IBM-supplied ‘Installation Verification Procedures’, and a detailed number of other tests exercising function used by John Deere. Your testing should emphasize the Compatibility phase more than any other phase. Things we recommend testing are applications or processes that are especially critical, and/or might be particularly unusual and not within our test scenario. Sample testing will suffice. A full regression test of all applications and functions is not necessary.

How was the schedule arrived at? We avoid periods of freeze or extended processing availability. Our goal is to keep the window small enough to limit problems or behavior related to software differences across environments, yet large enough to allow for ample testing and problem discovery. The schedule is tightly coordinated with other Computer Center infrastructure hardware and software changes throughout the environment.

How can I use new function, once we are in NFM and before we change the system default? Once we are in NFM, you can use new function in all dynamic SQL. To use new function in an application program, you need to override the system default of NEWFUN(NO) by specifying NEWFUN(YES) on your pre-compile.
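Overriding the system default on the pre-compile step looks roughly like this in JCL; DSNHPC is the DB2 precompiler, and the dataset and member names are illustrative (the remaining DD statements come from your existing compile procedure):

```
//PC      EXEC PGM=DSNHPC,
//        PARM='HOST(IBMCOB),NEWFUN(YES)'
//DBRMLIB DD DSN=MY.DBRMLIB(MYPROG),DISP=SHR
//SYSIN   DD DSN=MY.SOURCE(MYPROG),DISP=SHR
```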

Do I need to change my Explain PLAN_TABLE? There are new and changed columns in DB2 V8. However, we recommend that you continue to use your Version 7 PLAN_TABLE until we are completely converted to Version 8 NFM. The V7 PLAN_TABLE will work with all modes of V8.

Page 20: A Triathlon Event


DB2 V8 and COBOL

• Enterprise COBOL conversion project underway

– Their date was beyond our V8 date

• VS COBOL 2 no longer supported by DB2

– But it’s been out of support for years

– Will continue to run in LE

– Convert to Enterprise COBOL if changes needed

– Use V7 Precompiler for emergency

– All DB2 V7 libraries and the COBOL 2 Compiler have since been eliminated

• New DB2 Enterprise COBOL Compile Procedures provided (Precompiler and Coprocessor)

We found ourselves ahead of an Enterprise COBOL conversion project. We became the “driver” of this effort. The Enterprise COBOL conversion project subsequently gained higher priority and visibility across the company.

LE (Language Environment) is our default run-time environment. Many of our applications are not link-edited with LE, but run under LE. Reasons for not link-editing with LE are procedural in part: developers use existing JCL or dialogs that don’t use LE. We also have vendor-supplied, purchased applications that are not link-edited with LE.

In our testing, COBOL 2 modules link-edited without LE continue to run under LE with DB2 V8 without issue. The problems we’ve had are associated with ‘mixing’ LE and non-LE modules at run-time. For example, a module linked with LE calling a module linked under COBOL 2 will abend.

We provide new compile procedures for Enterprise COBOL, using both pre-compiler and coprocessor. This ensures developers have a working example with all the proper libraries and parameters.

The V7 precompiler library and the COBOL 2 compiler have been eliminated.

Page 21: A Triathlon Event


Compatibility with V7, Other V8 Modes

• NEWFUN (YES) or (NO) is in two places:

– DSNHDECP, system default.

– PARM.PC=(‘NEWFUN(NO)’)

• DB2 V8 (any mode) → DB2 V7

• DB2 V8 (any mode) → DB2 V8 (any mode), provided you are not using new function and have not specified NEWFUN(YES) on pre-compile.

There is a lot of confusion about compatibility once we are in the process of V8 migration. We wanted to clarify the compatibility issue. Load modules and DBRMs are typically staged from test to production, and bound. As long as the application doesn’t use new function or specify NEWFUN(YES), it will be compatible.

Once we are entirely converted to NFM, we will change our DSNHDECP system default.

Page 22: A Triathlon Event


Minor Compatibility Issues

Staging from DB2 V8 → V7

• Beware the “Not” Sign Problem!

– ^ (caret) or ¬ (sideways-L)

– V7 may get SQLCODE -104

– V7 PK15072 / UK09292 (F512) may help

• New additional timestamp format

– Default Java format

– Blank is the separator:

'2003-01-01 00:00:00.000000'

– V7 returns SQL0181N

• Applications should check both SQLCODE -180 and -181

– SQLCODE -181 in V7, some cases SQLCODE -180 in V8

We experienced a compatibility issue associated with usage of the ‘not’ sign character, ^ (caret) or ¬ (sideways-L). The V8 precompiler parses SQL in Unicode, so it converts the not sign ^ (x’5F’) from source code into Unicode, then back to the EBCDIC (CM and ENFM only) not sign ¬ (x’B0’) in the DBRM. This will bind just fine on the same release of DB2. However, it may result in SQLCODE -104 (Syntax error) when a V8 DBRM is bound on a V7 system. See the warning in the SQL Reference, Chapter 2 Language Elements, section on Predicates, subsection Basic Predicate at this link:

http://publib.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dsnsqj12/2.22.1?DT=20050325102208

SQL can be executed on a V7 system to find the plans and packages that may be susceptible to this problem, if re-compiled on V8.

The Application Programming and SQL Guide, Section 1.1.4 Selecting Rows Using Search Conditions: WHERE; Table 2: Comparison Operators Used in Conditions, shows the valid list of comparison operators. The not signs ^ and ¬ are not valid according to the list. DB2 V7 APAR PK15072 / UK09292 may help this problem.
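One way to find susceptible static SQL (a sketch of the idea, not the exact query we ran) is to scan the statement text stored in the catalog for the two not-sign characters, for both plans (SYSSTMT) and packages (SYSPACKSTMT):

```
-- Plans whose DBRM statement text contains a not-sign operator
SELECT DISTINCT PLNAME
  FROM SYSIBM.SYSSTMT
 WHERE TEXT LIKE '%^=%' OR TEXT LIKE '%¬=%';

-- Packages whose statement text contains a not-sign operator
SELECT DISTINCT COLLID, NAME
  FROM SYSIBM.SYSPACKSTMT
 WHERE STMT LIKE '%^=%' OR STMT LIKE '%¬=%';
```

Rewriting these predicates with the equivalent <> operator avoids the character-conversion problem entirely.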

We received reports of problems with dynamic Java applications inserting a particular timestamp format. On our V8 CM test system, this was working just fine. When the application staged to V7 production, they received an error: SQL0181N The string representation of a datetime value is out of range.

In DB2 V8, there is a new allowable timestamp format. It is the default Java format. The DB2 V8 Release Guide discusses the new format: http://publib.boulder.ibm.com/cgi-bin/bookmgr/BOOKS/dsnrgj12/2.2.18?ACTION=MATCHES&REQUEST=timestamp+format&TYPE=FUZZY&SHELF=&DT=20050325155616&CASE=&searchTopic=TOPIC&searchText=TEXT&searchIndex=INDEX&rank=RANK&ScrollTOP=FIRSTHIT#FIRSTHIT

Dynamic SQL in Java forces you to pass in a String. The timestamp gets converted to a string in this new format by default. This was always a problem in DB2 V7, requiring application intervention. DB2 V8 allows for the Java default format.

Static SQL (SQLJ) allows you to pass the true Java class/object as a host variable. There is no conversion to the String, so this wasn’t and isn’t a problem for SQLJ.
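The difference can be shown with two string literals for the same timestamp value; the table T1 and column TS_COL are hypothetical:

```
-- ISO format: accepted by both V7 and V8
INSERT INTO T1 (TS_COL) VALUES ('2003-01-01-00.00.00.000000');

-- Java default format (blank separator, colons): accepted by V8,
-- rejected by V7 with SQL0181N / SQLCODE -181
INSERT INTO T1 (TS_COL) VALUES ('2003-01-01 00:00:00.000000');
```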

Page 23: A Triathlon Event


Migration Pre-Work

• Batch CLIST – copy and tailor JCL

• DSNTESQ

• Stage Proclib members

– DBM1 – MEMLIMIT Parm

– IRLM – MEMLIMIT Parm

– Enterprise COBOL procs

• DSNTIJUZ – new module DSNHMCID

• DSNTIJVC

– V8 no longer contains conversion for DSNTBIND and DSNTPSMP. We add them.

We execute a batch CLIST to copy and tailor all JCL for a specific system.

DSNTIJPM (DSNTIJP8 shipped with V7)

DSN1CHKR

DSN1COPY with CHECK option

CHECK all Catalog and Directory Indexes

DSNTESQ steps for Catalog health check

DSNTIJIN defines VSAM linear datasets for new catalog objects

Stage new, changed proc lib members. For IRLM, PC and MAXCSA are no longer used, but the parameters have to be there for compatibility. The values for PC and MAXCSA don't matter. The MEMLIMIT parameter for DBM1 address space must be 4TB or greater. If this is changed to something less, it is ignored. (Refer to PK03680 for more information.)
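In the staged DBM1 procedure, MEMLIMIT is the standard z/OS keyword on the EXEC statement; this is a minimal sketch with placeholder names, and your procedure keeps its existing PARM and DD statements:

```
//DSN1DBM1 PROC
//* MEMLIMIT must be 4TB or greater in V8; smaller values are ignored
//IEFPROC  EXEC PGM=DSNYASCP,REGION=0M,MEMLIMIT=4T
//* remaining statements unchanged from the V7 procedure
```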

DBINFO Query (Migration Step #20 in Installation Guide)

DSNTINST - install CLIST in MIGRATE

Tailor ZPARMs

DSNTIJUZ – assemble and link ZPARMs and DSNHDECP. DSNHMCID is the new data-only load module assembled and linked with ZPARMs.

DSNTIJVC - merges CLISTs into one library and converts to variable block format. In V7, this included REXX execs DSNTBIND, DSNTPSMP. In V8, they are no longer included. Rather than add another dataset to our staging process and to all systems, we add them to this JCL and continue to include them in our staged CLIST dataset. IBM recommends both datasets get staged.

Create JCL for change implementation. Notify automation to suspend some jobs / alerts.

Page 24: A Triathlon Event


Page 25: A Triathlon Event


Migration: Compatibility Mode -

Highlights

• Start DB2 V8: expect normal errors

– DSN9023I, DSNL008I, DSNT500I

• DSNTIJTC - CATMAINT

– DB2 “Classic”: 00:00:25 CPU, 00:01:32 elapsed

– DB2 SAP: 00:00:13.04 CPU, 00:05:22 elapsed

• Bind System Plans and Packages

• Grants for new stuff

– GRANT EXECUTE ON PACKAGE DSNUT810.DSNUGSQL

– GRANT EXECUTE ON PLAN DSNTIA81

– GRANT SYSOPR to RACF groups

– GRANT ARCHIVE to RACF Groups

We execute commands from console, so everything gets written to syslog.

-DIS UTIL(*)

-TERM UTIL(*)

-DIS THREAD(*) TYPE(INDOUBT) and recover them

-DIS DB(DSNDB01) SPACE(*) LIMIT(*) RESTRICT

-DIS DB(DSNDB06) SPACE(*) LIMIT(*) RESTRICT

Check for error page ranges or deferred restart states and recover, if applicable

Image Copy Catalog and Directory SHRLEVEL REFERENCE

Stop TSO (Our common TSO logon procedures allocate datasets for ISPF panels,

messages, and CLISTs. This is the best way for us to free up these file allocations.)

-ARC LOG to truncate and archive Log and BSDS

Stop DB2 V7

Execute JCL to switch datasets

Start DB2 V8 – expect errors: DSN9023I on START RLIMIT, DSNL008I for DDF, DSNT500I

Resource Unavailable on DSNDB06

Start TSO

DSNTIJTC - CATMAINT

Execute JCL to BIND all system plans and packages

GRANTs for new stuff (In V8, the console operator no longer has SYSOPR by default)

-STA DDF

-STA RLIMIT to start resource limit facility

Page 26: A Triathlon Event


Migration: Compatibility Mode -

Highlights

• DEEREIX – Drops, creates consistent user-defined Indexes on Catalog

• DSNTIJSG

• DSNTIJCC / DSNTIJCM

• Verify Views and regenerate

• BIND DSNAOCLI – Bind CLI against all V7 systems one time

• Vendor tool update JCL

• Basic Verification

• -DIS GROUP DETAIL shows release and mode


We execute a job called DEEREIX to clean up and create consistent user-defined Indexes on Catalog objects.

CHECK all Catalog and Directory Indexes

DSN1CHKR

DSN1COPY with CHECK option

Image Copy DB2 V8 Catalog and Directory (includes 2 new tablespaces: SYSALTER, SYSEBCDC)

DSNTIJSG

DSNTIJCC / DSNTIJCM – What you run depends on whether you’ve enabled the Management Clients Package, and your level of service. Be careful not to lose user data by dropping tables. Refer to the Program Directory for DB2 Management Clients Package, GI10-8567. Also refer to the following documentation for migrating to z/OS Enablement Version 8:

http://www.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q1=MIGRATING+JDB771D+JDB881D&uid=swg27006136&loc=en_US&cs=utf-8&lang=en

Verify views and regenerate (Migration Step #25 in Installation Guide)

BIND DSNAOCLI packages to all V7 subsystems one time

Execute any necessary vendor tool update JCL. We executed an upgrade job for BMC and bound plans and packages for QMF.

We execute some basic system verification. We run a job to create Database, Tablespace, Table, insert data, copy using LISTDEF and TEMPLATE, issue remote DRDA query.
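The copy step of our verification job uses LISTDEF and TEMPLATE. A minimal sketch of such utility control statements (database, table space, and dataset-name pattern are hypothetical, not our actual job):

```
LISTDEF VERLIST INCLUDE TABLESPACE VERIFDB.VERIFTS
TEMPLATE VERCOPY DSN 'HLQ.&DB..&TS..COPY'
COPY LIST VERLIST COPYDDN(VERCOPY) SHRLEVEL REFERENCE
```

The point of the verification is simply to exercise the new-release utility path end to end, not to produce a usable backup.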

-DIS GROUP DETAIL can be used whether or not you are Data Sharing. This returns information about DB2, including release level and mode (C,E, or N).

Page 27: A Triathlon Event


Migration: CM – Issues / Problems

• Additional conversions needed

– 37 – 850

– 1208 – 850

– 1140 – 1208

– D UNI,ALL on console

• BIND DSNCLIMS SQLCODE –189

– Use SQLERROR(CONTINUE)

• Enterprise COBOL

– Coprocessor doesn’t expand SQLCA

– Include CCSID in SQL parameters

In early testing, we found we needed to add conversions for 850, which is used on our Unix server for Information Integrator. We also had to add the conversion for 1140, which is the Enterprise COBOL default CODEPAGE. Without this, we had Enterprise COBOL applications “break” when we migrated to DB2 V8.

To see what conversions are in place in your shop, execute the display command on the console: D UNI,ALL

Conversions can be added dynamically using the SET UNI command (without an IPL or DB2 V8 restart). You may have to adjust the real storage to handle an active and an inactive conversion list.
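The console commands above look roughly like this (the CUNUNIxx parmlib member suffix is site-specific; syntax per z/OS Unicode Services support):

```
D UNI,ALL        Display the active conversion environment
SET UNI=xx       Activate conversions from parmlib member CUNUNIxx, no IPL needed
```

Verify with another D UNI,ALL afterward that the new conversion pairs are in the active list.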

Bind of the DSNCLIMS package returns SQLCODE -189. We were directed by IBM to use SQLERROR(CONTINUE).

Enterprise COBOL with the coprocessor doesn’t expand the SQLCA in the source code listing. This is a problem for developers using source-level debugging tools, as they have no visibility to the SQLCA unless it is moved into other working storage fields.

We discovered we had to include CCSID(37) in the SQL parameters for compile. Without it, we were getting CC 16 on the Enterprise COBOL compile.

Page 28: A Triathlon Event


Migration: CM – Issues / Problems

• UNION Metadata differences in V8

– PK03946 / UK03567 (F506)

– New ZPARM UNION_COLNAME_7

• FOR BIT DATA is OK!

• DSN1COPY with CHECK on “old” systems

– Before CM migration => CC0

– After CM migration => CC8

– out of sequence pages, unexpected page numbers

– PGNUM / PGLOGID V4 format

– Image copy in V8 will clean this up

Prior to V8, the result column name in a SQLNAME field of the SQLDA for a statement involving a UNION reflected the column name or label of the first sub-query in the statement. In V8, if labels are used, DB2 returns the label of the column in the first sub-query. If labels are not used, the result column name will only be returned if the column name is the same across all sub-queries in the statement. This can cause problems for Java programs if they rely on this metadata to string and parse results. Refer to APAR PK03946 / UK03567 and new ZPARM UNION_COLNAME_7 for a means of transitioning to the new behavior.
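A minimal illustration of the metadata change (table and column names are made up):

```sql
SELECT COLA FROM TAB1
UNION ALL
SELECT COLB FROM TAB2;
-- V7: SQLNAME in the SQLDA is 'COLA', taken from the first sub-query
-- V8: COLA and COLB differ and no labels are defined,
--     so no result column name is returned
```

Renaming the columns consistently (e.g., AS RESULT_COL in both sub-queries) restores a predictable SQLNAME under the V8 rules.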

Some catalog data is converted to FOR BIT DATA in V8 CM. For example, the SYSIBM.SYSVIEWS TEXT column is converted to FOR BIT DATA. This information is legible through SQL SELECT from an EBCDIC terminal emulator, but is not legible from an ASCII client. This is normal and to be expected.

Page 29: A Triathlon Event


Migration: CM – Issues / Problems

• Virtual Storage

– ZPARMs, Trace for consumption tracking

– II10817 Virtual Storage Info APAR

– Real Storage – OA17114

• Access Path Changes / Differences

– ZPARM STATROLL=YES

– Visual Explain Statistics Advisor

– REOPT(ALWAYS)

– OPTHINTS

We experienced some spikes in virtual storage consumption, but unfortunately did not have enough information to diagnose the problem. We adjusted ZPARMs and trace parameters to better track virtual storage consumption.

We changed ZPARM STATTIME to 5, the new default. We changed SYNCVAL from NO to 0 to align on the hour. We started traces to capture IFCID 217 (Global trace class 10) and IFCID 225 (this moves from Statistics Trace Class 6 to Class 1 with PQ99658 / UK04394 in F506).

We’ve seen some cases where the access path was different between V7 and V8. One specific case involved a partitioned tablespace with some empty partitions. With STATROLL=NO, the partition-level statistics are not aggregated. When this was changed to STATROLL=YES, we had an improved access path.

We are using the Visual Explain Statistics Advisor to help identify the statistics that should be gathered for specific queries. We have found this to be very helpful.

In some cases, simply binding with REOPT(VARS) or REOPT(ALWAYS) has resolved query issues.

We’ve enabled the OPTHINTS ZPARM dynamically on some systems where we had an access path regression in a critical application. This allowed us to stay on V8 and avoid having to fall back to V7. We leverage OPTHINTS as a workaround for the anomaly or rare case of SQL that doesn’t optimize the way we would like, while we investigate the underlying issue.

Page 30: A Triathlon Event


Migration: CM – Issues / Problems

• DB2- Managed Stored Procedures

– Convert to WLM!

• Private Protocol

– Mixing V7 and V8 in distributed hop

PK23866 / UK15046 (F606) and JCC Driver 2.8.59

– Convert to DRDA!

Tool to assist with conversion available on DB2 Examples TradingPost:

http://www.ibm.com/support/docview.wss?rs=64&uid=swg27008509

We discovered some problems with DB2-managed stored procedures that have since been resolved with maintenance. IBM reminds us that support for DB2-managed stored procedures is deprecated in V8, and will be eliminated in the future. The conversion process is fairly simple:

•Link-edit with DSNRLI

•ALTER the procedure WLM environment name

Private Protocol is an ancient, proprietary method of accessing data at other DB2 z/OS locations using SNA/APPC. While technically it is still supported, we’ve found it to be somewhat problematic in V8. IBM reminds us that support for Private Protocol is deprecated in V8. Based on our migration experiences, we are accelerating plans to convert from Private Protocol to DRDA. For most cases, this can be accomplished by a BIND of the package at local and remote sites, and a BIND of the PLAN using a PKLIST containing the local and remote packages.

IBM has recently developed a “tool” to assist with conversion. This consists of JCL that executes a REXX routine. Output of the REXX includes CREATE ALIAS statements and BIND statements.
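As a sketch, the bind sequence for converting one package to DRDA might look like this (location, collection, member, and plan names are hypothetical):

```
BIND PACKAGE(COLLID1) MEMBER(PGM1) ...             at the local site
BIND PACKAGE(REMSITE.COLLID1) MEMBER(PGM1) ...     at the remote site
BIND PLAN(PLAN1) PKLIST(COLLID1.*, REMSITE.COLLID1.*)
```

The key point is that the PKLIST must name both the local and the remote collection so the plan resolves either way.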

Page 31: A Triathlon Event


Migration: CM – Issues / Problems

• If PKGLDTOL=YES in V7

– ZPARM is gone in V8

– SQLCODE -805 after V8 Migration (local package not in plan)

• REORG – PK15109 PE

– PK32423 fix

We used PKGLDTOL=YES in V7 to tolerate plans with no local packages. (PKGLDTOL=NO is the default.) This got us around a lot of issues with older plans, and plans with remote packages but no local package in the PKLIST. The purpose of this ZPARM was to assist with migrating to V7 (PQ59207). This ZPARM is no longer available in V8, and plans that were executing prior to migration with no local package received SQLCODE -805 immediately following V8 Migration to Compatibility Mode.

The following query can be helpful in identifying packages that are bound remotely but not locally. These packages worked in DB2 V7, but will fail in V8 with SQLCODE -805, because the local package is not in the plan.

If rows are returned, be sure to bind the identified packages on the local system as well as the remote system. The plan should include both local and remote packages in the PKLIST:

SELECT A.LOCATION, A.PLANNAME, A.NAME, C.QUALIFIER

FROM SYSIBM.SYSPACKLIST A,

SYSIBM.SYSPLAN C

WHERE A.LOCATION NOT IN ('*',' ',CURRENT SERVER)

AND A.PLANNAME = C.NAME

AND C.QUALIFIER = '???????' (where ??????? = your RACF group id)

AND NOT EXISTS

(SELECT B.PLANNAME FROM SYSIBM.SYSPACKLIST B

WHERE B.LOCATION IN (' ','*',CURRENT SERVER)

AND A.PLANNAME = B.PLANNAME

AND A.COLLID = B.COLLID

AND A.NAME = B.NAME);

Page 32: A Triathlon Event


Migration: Driver / Client Issues

• ODBC CLI Driver V8 <FP6

– Does not distinguish between CM and NFM

– Arrays can get SQLCODE -4700

• ODBC CLI Driver Differences

– ODBC CLI Driver V7

· Queries catalog directly for schema metadata

– ODBC CLI Driver V8

· Invokes metadata stored procedures

· Reference to ALIAS for remote object fails with SQLCODE -204

– PK23279

• JCC Driver – Beware of V7 at higher maintenance

• MS-SQL Server Unexpected data length errors

– Microsoft Hotfix 829016 or 897246

– Microsoft Hotfix 829016 or 897246

Prior to CLI Driver FP6, the CLI Driver does not distinguish between V8 CM/ENFM and NFM. If client tools send arrays to reduce network traffic, CLI packaged it up and sent it as a multi-row insert, which was new function. This resulted in SQLCODE -4700 ATTEMPT TO USE NEW FUNCTION BEFORE NEW FUNCTION MODE. We experienced this problem using Informatica and a lower level of the CLI driver.
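The new-function statement behind the -4700 is multi-row INSERT. A hedged example of what the driver effectively issued (table and host-variable arrays are illustrative):

```sql
-- Valid only in V8 NFM; in CM/ENFM this fails with SQLCODE -4700
INSERT INTO MYTAB (COL1, COL2)
  VALUES (:hva1, :hva2)
  FOR 10 ROWS;
```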

We experienced problems using the ODBC/CLI V8 driver using aliases for remote objects. The CLI V7 driver queried the target DBMS directly for schema metadata. The CLI V8 driver uses new metadata stored procedures. This allows the driver to issue a standardized call to the appropriate stored procedure, regardless of platform. In the case where an alias is used for a remote object, this results in an SQLCODE -204. This is a problem for us because we have Accounting and Manufacturing applications that leverage desktop tools such as MS-Access and MS-Excel. These applications execute stored queries at specific intervals, such as Month-end and Quarter-end. They reference aliases to access remote objects with 3-part-names. These applications will likely have to be modified, or database target changed. IBM has opened an APAR for this issue: PK23279.

Our migration has taken longer than planned. We’ve had to apply maintenance to V7 systems while V8 Migration continues. We inadvertently placed a higher JCC driver in production than what was running in the development/test environments:

•Production DB2 V7 at Put 0509 - IBM DB2 JDBC Universal Driver Architecture 2.5.76

•Dev/Test DB2 V8 at Put 0508 - IBM DB2 JDBC Universal Driver Architecture 2.5.48

This hasn’t caused any problems, but potentially it could. This is something to be aware of and plan for when applying V7 or V8 maintenance while the V8 Migration project is underway.

After migration to Compatibility Mode, a SQL Server process started receiving errors: OLE DB provider 'MSDASQL' returned an unexpected data length for the fixed-length column <column>. The expected data length is <xx>, while the returned data length is <yy>. [SQLSTATE 42000] (Error 7347). This is a documented MS-SQL Server issue:

http://support.microsoft.com/default.aspx?scid=kb%3Ben-us%3B897246

Page 33: A Triathlon Event


Migration: CM – Summary

• Be prepared to gather doc

– Clear out DAE

– Trace JCL

– SYSMDUMP

– Client or Application Server trace

– V7 AND V8 fix

Clear out DAE (dump analysis and elimination) before migration to avoid suppression of dumps you may need for problem resolution.

We found it handy to have the DRDA trace JCL ready:

-STA TRACE(GLOBAL) CLASS(30) IFCID(180,165) DEST(SMF)

-STA TRACE(STAT) DEST(SMF) CLASS(4)

SYSMDUMP – IBM requested we add this to WLM procs to gather additional documentation for problem resolution. We created a GDG with DISP=(MOD,CATLG,CATLG), which caused additional problems. The dataset isn’t cataloged until close. One instance of the WLM environment would allocate the new GDG. Additional instances of WLM environment would attempt to allocate same GDG, resulting in UDF and stored procedure timeouts (00E79002). An alternative is to use dynamic system symbolic variables, such as &HHMMSS, instead of a GDG. The GDG allocation may have worked if we had used DISP=(NEW,CATLG).
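A hedged sketch of the alternative SYSMDUMP DD using a dynamic system symbol rather than a GDG (the high-level qualifier and space parameters are placeholders, not our actual values):

```
//SYSMDUMP DD DSN=HLQ.WLMENV.T&HHMMSS..DUMP,
//            DISP=(NEW,CATLG),
//            UNIT=SYSDA,SPACE=(CYL,(100,100),RLSE)
```

Because each WLM address-space instance resolves &HHMMSS independently, concurrent instances allocate distinct dataset names and avoid the enqueue contention we hit with the MOD GDG.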

Sometimes it’s necessary to run a trace on the client or application server:

Turn on trace: db2trc on -l 32M
Recreate the problem
Dump trace: db2trc dmp trace.dmp
Turn off trace: db2trc off
Format trace: db2trc fmt trace.dmp trace.fmt

Make sure the trace hasn’t wrapped. Re-run with larger buffer if it has.

As fixes are identified, it helps to make note of the V7 fix, too, if applicable. Maintenance may be required at some point before migration is completed.

Page 34: A Triathlon Event


Fallback

• Gather as much doc as you can!

• Fallback ‘like’ environment first

• Recreate Views regenerated in V8

• BMC Toolset Performance

– R192399, IBM APAR PK12389

• IMS S806 Abend at shutdown

– /STOP SUBSYS before fallback

• DSNTIJSG – DSNWZP, DSNTPSMP

• DSNTIJCC / DSNTIJCM – fallback steps

Gather as much documentation as you can before you fall back! You don’t want to experience the same problems when you re-migrate.

Fall back your “like” environment first to practice.

Views that were regenerated in V8 will need to be dropped and recreated. These are the views indicated in Migration Step #25.

BMC Toolset performance was very poor after fallback. Original VIEWs created for DB2 V8 “Toleration” result in Tablespace scans. Because of the View design, Indexes on the DB2 V8 Catalog Tables cannot be used. Apply BMC Resolution 192399 to address this. IBM APAR PK12389 addresses the performance issue.

To avoid an IMS S806 abend at shutdown (module DSNHMCID not found), issue a /STO SUBSYS SSID before fallback, and /STA SUBSYS SSID when done.

DSNTIJSG – To keep DSNWZP as WLM instead of SPAS, execute the alter statement: Alter procedure SYSPROC.DSNWZP External name DSNWZPR; Drop DSNTPSMP and recreate using V7 syntax.

DSNTIJCC/DSNTIJCM – there are fallback steps that must be completed for Management Clients Enablement. These are documented in Section 6.6, Fallback to 390 Enablement Version 7 Procedure, of the Program Directory for IBM DB2 UDB for z/OS DB2 Management Clients Package, GI10-8567. Several stored procedures and temporary tables are dropped and recreated. Data must be unloaded and reloaded to preserve user settings in DSNACC.UTLISTE. Refer to this for the most recent documentation on falling back to 390 Enablement Version 7:

http://www.ibm.com/support/docview.wss?rs=64&context=SSEPEK&q1=fallback+jdb771d+jdb881d&uid=swg27006130&loc=en_US&cs=utf-8&lang=en

Page 35: A Triathlon Event


Remigration

• Clear out DAE

• Take normal steps for maintenance

• Re-do health checks for migration

• DSNTIJSG - DSNWZP, DSNTPSMP

• DSNTIJCC / DSNTIJCM

• -DIS GROUP DETAIL

DSNTINST - install CLIST

DSNTIJUZ to reassemble and link ZPARMs and DSNHDECP

DSNTIJVC for CLISTs

Stage maintenance

Create disk-reader members

Clear out DAE (dump analysis and elimination) if necessary

-DIS UTIL(*), -TERM UTIL(*)

Check Catalog and Directory Indexes, DSN1CHKR, DSN1COPY with CHECK

Image Copy V7

Stop DB2, Stop TSO (Our common TSO logon procedures allocate datasets for ISPF panels, messages, and CLISTs. This is the best way for us to free up these file allocations.)

Call disk-reader to change datasets

Start DB2, Start TSO

Call disk-reader to bind system packages and plans

Check Catalog and Directory Indexes, DSN1CHKR, DSN1COPY with CHECK

Image Copy V8

DSNTIJSG - Alter procedure SYSPROC.DSNWZP External name DSNWZP; Drop DSNTPSMP and recreate using V8 syntax.

DSNTIJCC/DSNTIJCM - Redo steps taken at fallback

Regenerate the views per Migration Step #25

-DIS GROUP DETAIL for warm-fuzzy

Page 36: A Triathlon Event


Page 37: A Triathlon Event


•ENFM

Page 38: A Triathlon Event


•ENFM

(surprise)

Page 39: A Triathlon Event


Migration:

Enable New Function Mode

• Installation CLIST

– We ignore sizing

• V8CHGJCL

– Deere-written REXX Procedure

– updates JCL dataset sizing and VOLSER based on current sizes and placement

• -ARC LOG from console

We ignore the sizing on the installation CLIST. Our SMP environment is entirely contained in an LPAR isolated from test and production. We don’t have access to production systems through our installation CLIST, we do not keep this information updated in the installation CLIST, and we’re not using msys or DAS at this time.

After completing the Installation CLIST for ENFM, we execute our own REXX EXEC that updates JCL dataset sizing based on current sizes of datasets. This changes allocation from RECORDS to CYL or TRK, and changes VOLSER to the current VOLSER allocation. This is available on the IDUG Insider Code Place.

Image Copy Catalog and Directory

-ARC LOG to truncate and archive log and BSDS

Page 40: A Triathlon Event


Migration:

Enable New Function Mode

• DSNTIJNE Catalog Conversion

– We include shadow datasets for user-defined IXs

• Issues

– S04E RC=00E30086 / 00E60820 in ENFM009A

· PK16749 / UK12078 - F603

– J0001.A002 shadow datasets

– Switch phase contention

– SQLCODE -904, RC 00E30305

· Rebind for SYSIBM.SYSDUMMY1

We run a customized version of catalog conversion DSNTIJNE. We have included the shadow datasets for user-defined indexes. They will be converted, but we can’t ALTER NOT PADDED until NFM. On our large developer test system, one of these indexes “swelled up” quite a bit, but we’ve decided to keep the indexes defined and ALTER NOT PADDED as soon as we’re committed to NFM.

We had a few problems with the execution of DSNTIJNE. We included a TIME parameter to avoid S322. We had an S878 out-of-storage condition on conversion steps, so we increased SORTNUM. We also had a B37 on the COPYDDN(SYSCOPY) for SPT01, and redirected to tape.

We experienced an abend S04E RC 00E30086, ENFM009A Step RC 8, and 00E60820 in DSNTICPS (check SYSJAUXA and SYSJAUXB COPY status), which was resolved by UK12078.

On one of the large systems, we’ve added a second dataset for SPT01. The shadow dataset had to be added to the DSNTIJNE JCL.

We experienced enqueue problems during the SWITCH phase for SPT01. We did not start ACCESS MAINT, so the system was available to everyone. Normally this is not a problem, but we were nearing the end of our window, and applications were becoming active and connecting. To get through our conversion, we stopped CICS, IMS, DDF, and Omegamon. We also terminated an outstanding utility that was in a stopped state. This allowed our conversion to run to completion.

SYSIBM.SYSDUMMY1 is dropped and recreated during conversion, so plans and packages are invalidated. An explicit REBIND is required.

Page 41: A Triathlon Event


Migration:

Enable New Function Mode

• DSNTIJNH Halts Conversion

– During testing, halted after SYSDBASE conversion, executed RUNSTATS, and executed our entire test scenario again

• Runstats on catalog objects

• Rebind Plans and Packages for SYSDUMMY1

• Capture timings and sizes


During our testing, we executed DSNTIJNH to halt conversion after the SYSDBASE conversion. We executed RUNSTATS, and executed our entire test scenario again.

When ENFM is complete, we execute RUNSTATS on all catalog objects.

We rebind the plans and packages that have a dependency on SYSIBM.SYSDUMMY1.

It is helpful to capture timings for ENFM to have a good idea of what to expect in production environments.

Page 42: A Triathlon Event


ENFM Timings & Sizes

• DB2 “Classic”

• DB2 SAP

Timings will be discussed based on latest information.

Page 43: A Triathlon Event


DB2 “Classic” Environment

at John Deere

• Largest DB2 System Catalog - Number of objects

– 3000 Databases

– 26500 Tablespaces

– 28000 Tables

– 36500 Indexes

– 20700 Plans

– 53000 Packages

Our largest system in terms of DB2 Catalog and Directory size is our major developer test system. This system is used for all “unit” testing by developers and is the primary development system. After our system programmer environment, this is the first system to get maintenance and migrate to new releases. This is our first opportunity for developers to test.

Page 44: A Triathlon Event


DB2 “Classic” ENFM

Timings and Sizes

• ENFM

– Size Before 10.56GB / 10.9GB After

– 34 Min Elapsed / 11 Min CPU

• SYSDBASE

– Size Before .67GB / .74GB After

– 8 Min Elapsed / 3.5 Min CPU

• SYSPKAGE

– Size Before 1.42GB / 1.45GB After

– 4 Min Elapsed / 1.5 Min CPU

Due to problems encountered and self-imposed “freeze” periods for seasonal processing requirements, our migrations in the DB2 “Classic” environment were delayed. All systems are V8. We are in process of migrating to ENFM / NFM. At the time of this publication, we are stable on Put 0512 plus many PTFs, HIPERs, and a few APARs.

Our DB2 SAP team has completed migration to V8 NFM, so we will look at their experience and timings also.

Page 45: A Triathlon Event


DB2 SAP Environment

at John Deere

• DB2 System Catalog - Number of objects

– 12450 Databases

– 17300 Tablespaces

– 35600 Tables

– 42200 Indexes

– 15 Plans

– 323 Packages

This is one of the larger test DB2 SAP systems. It is currently used for training.

Page 46: A Triathlon Event


DB2 SAP ENFM Timings and Sizes

• ENFM

– Size Before 1.95GB / After 2.02GB

– Elapsed 18.13Min / CPU 3.54Min

• SYSDBASE

– Size Before .43GB / After .47GB

– Elapsed 11Min / CPU 2.39Min

• ENFM Catalog conversions < 20 minutes elapsed

These are overall timings and sizes for ENFM against the large DB2 SAP system, and for the SYSDBASE conversion.

Catalog conversions took less than 20 minutes elapsed time.

The team saw some additional extents in SYSDBASE and SYSVIEWS after early migrations. On subsequent migrations, they increased SYSDBASE and SYSVIEWS as much as 25% to allow plenty of room for growth and to avoid going into extents. This may have been more than necessary, but they did not want to have to worry about it.

Page 47: A Triathlon Event


Page 48: A Triathlon Event


Migration: New Function Mode

• DSNTIJNF

– New function is now possible

– -DIS GROUP DETAIL

• Delete VSAM dataset for DSNDB06.DSNKCX01

– index on SYSPROCEDURES

• REBIND if you haven’t already

• OA07685 – ISPF Browse support for Unicode

• DSNTIJEN – to return to ENFM, disabling new function

DSNTIJNF will put you into new function mode. New function is now possible. We do not change DSNHDECP NEWFUN yet.

-DIS GROUP DETAIL for warm-fuzzy.

At this point, we delete our old VSAM dataset for the index on SYSIBM.SYSPROCEDURES. There is no falling back, so we’ll never use this again.

If you haven’t already, it is recommended to rebind plans and packages to take advantage of optimizer enhancements for static SQL. Rebind also ensures the conversion of plans and packages to the new V8 format. Rebinds can be done in CM, but if they were not, they should be done now. If you’re only rebinding once, rebind in NFM.
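The blanket rebind can be driven with DSN subcommands, for example (many shops instead generate selective REBINDs from SYSPLAN/SYSPACKAGE):

```
REBIND PLAN(*)
REBIND PACKAGE(*.*.(*))
```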

OA07685 – ISPF Browse support for Unicode – provides display and find command support. Now that DBRMs are in Unicode, you can use this to browse them in EBCDIC or whatever your terminal CCSID is. This is documented in z/OS V1R7.0 ISPF User's Guide Volume II, section 1.3.3.2 Browse Primary Commands – the sections on DISPLAY and FIND:

http://www.ibm.com/servers/eserver/zseries/zos/bkserv/r7pdf/ispf.html

Page 49: A Triathlon Event


Migration: New Function Mode

• DBD conversion: DBDs have a new format in V8

• When DB2 Reads a DBD written prior to V8

– CCSIDs are checked and DBD is marked

– Transformed to V8 format

• When DB2 Writes a DBD in CM

– Transformed to V7 format

• In CM for a length of time?

– may end up with DBDs in V7 format with CCSIDs corrected

– Performance measurement ‘noise’, but…not free

– No way to query since info is in directory

– it’s those accessed in CM – probably the databases you use the most

– Perform some small change to convert

Thanks to Jay Yothers, IBM SVL, DB2 for z/OS Development, for the following explanation:

* V8 requires that the CCSID information in the DBD be accurate. CCSID information in DBDs written prior to V8 is sometimes OK, sometimes not ...but in the end, we can't rely on it, so when we load a DBD (in any mode) that was written prior to V8, we ensure the CCSID information is correct and write it back out. When we correct the CCSID info, we mark the DBD so that we don't do it again, since it isn't free. If such a DBD is subsequently altered and written out by V7, due to fallback or coexistence, that mark will disappear and we'll correct the CCSID info and write the DBD out again the next time it is loaded by V8.

* When we read a DBD written prior to V8, we transform it to conform to V8 format.

* When we write a DBD in CM, we transform it into V7 format.

When you take those things together, you'll see that we could end up with a DBD in V7 format with the CCSIDs corrected that would stay that way, without some form of alter to it in ENFM or NFM. Doing a -DISPLAY DATABASE(*) in ENFM or NFM will cause all the DBDs not yet loaded in V8 to have their CCSID info corrected and written out in V8 format. But these DBDs are probably less interesting than those used in CM, which would already have been written out in V7 format. Since this info is in the directory, there is no query you can do to find out which is which.

The cost of transforming a DBD from V7 format to V8 format is in the realm of performance measurement noise. This means you can't notice it in the real world, but it isn't free. So, I would suggest making some innocuous alter to the DBDs you depend on in NFM at your leisure. You know which ones they are because you know their names without having to look them up. Just alter the primary space of one of the table spaces or indexes. Better yet, alter them to use our sliding secondary so that you don't have to worry about primary or secondary any more.
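The suggested innocuous ALTER might look like this (database, table space name, and quantities are hypothetical):

```sql
-- Any committed alter causes the DBD to be rewritten in V8 format
ALTER TABLESPACE MYDB.MYTS PRIQTY 720;
-- Or adopt sliding-scale secondary allocation, assuming MGEXTSZ
-- sliding-scale support is in place (-1 lets DB2 manage extents):
ALTER TABLESPACE MYDB.MYTS SECQTY -1;
```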

Page 50: A Triathlon Event


Page 51: A Triathlon Event


The “Fourth” Mode:

Committed to NFM

• DSNHDECP NEWFUN=YES

• Plan Table Changes

• Review / Change ZPARMs

• DSNTIJMC

• DSNTEJ1L

– DSNTEP2

– DSNTEP4

• DSNTEJ2A

– DSNTIAUL

Once all systems are running in NFM and stable, and we are committed to New Function Mode, we will change our system default to NEWFUN=YES in DSNHDECP.

Execute JCL to modify PLAN_TABLEs. This JCL will take advantage of the new online schema change capability.

We take this opportunity to review and change some ZPARMs. We’ll change DSVCI=YES, MGEXTSZ=YES, and review DSMAX limits by system and adjust, if necessary.

DSNTIJMC to convert metadata stored procedures to V8 and bind

DSNTEJ1L for the NFM versions of DSNTEP2 and DSNTEP4

DSNTEJ2A for the NFM version of DSNTIAUL

Page 52: A Triathlon Event


The “Fourth” Mode:

Committed to NFM

• Alter BP0 PGFIX YES

• DSNTIJNR to convert DSNRLST

• DSNJCNVB to convert BSDS

• User-defined IXs on Catalog

– ALTER NOT PADDED

– REBUILD

• Add DSNTIAP to SPUFI and DCLGEN

• DSNTIJLR for SYSPROC.DSNLEUSR

– Supports USERNAMES encryption

ALTER BP0 to PGFIX (‘page fix’) pages in memory.

We convert resource limit facility and BSDS.

We alter the user-defined indexes on the catalog to NOT PADDED and rebuild them (if we haven’t already, based on their “swelling up” during ENFM conversion).
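For one user-defined catalog index, this step looks roughly like the following (the index name is hypothetical, not one of our actual indexes):

```sql
ALTER INDEX DEERE.XSYSTAB01 NOT PADDED;
-- then rebuild it with the utility:
-- REBUILD INDEX (DEERE.XSYSTAB01)
```

The rebuild is what reclaims the space, since NOT PADDED stores varying-length key columns at their actual length.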

The DSNTIAP package is added to the SPUFI and DCLGEN plans.

Run job DSNTIJLR to create and bind packages for the new stored procedure SYSPROC.DSNLEUSR. This stored procedure leverages z/OS Cryptographic Services for encryption and decryption of the authorization ID and password in the USERNAMES table.

Page 53: A Triathlon Event


DB2 “Classic” V8 Migration Status

• All systems to CM

• Data Warehouse/Marts directly to NFM

• OLTP directly to NFM with BMC upgrade

• DB2 “Classic” V8 Migrations

– CM: January 2005 - July 2006

– ENFM / NFM: July 2006 – March 2007

– 3 systems V8 CM

– 0 systems ENFM

– 18 systems V8 NFM

• Performance Observations

The DB2 V8 Migration path for “Classic” systems is to migrate to Compatibility Mode on all systems. Once all systems are migrated to CM and things are stable, we begin the next phase.

Several systems are used for Data Warehousing and Data Marts, so were migrated directly to NFM. They are fairly isolated and limited-use without a lot of cross-system connectivity or traditional / common applications.

We originally planned to migrate our OLTP systems to ENFM, and then take everything to NFM in a short window. We’d like to prevent developers from leveraging new function before production goes NFM. However, the level of BMC tools we are using (Change Manager 7.4.01F, Catalog Manager 7.4.01, DASD 7.1.00) is problematic in ENFM. We were in process of upgrading BMC tools to 8.2.0 (DASD 8.1), but they won’t work with V8 CM. To lessen the hardship of tool issues, we decided to go directly to NFM followed immediately by the BMC tool upgrade, one system at a time.

The DB2 “Classic” Migrations at John Deere officially kicked off with our first Development system in January of 2005. (We migrated a few System Programmer test systems back in 2004 as part of testing.) We plan to complete migrations with all systems in New Function Mode by March 2007.

DB2 “Classic” Performance Observations

V7-V8CM: For the OLTP systems, we saw a 10-15% CPU increase, with a 500MB-1GB increase in memory needs.

On the Data Warehouse and Data Marts we saw CPU reductions of 25-40% but still saw memory consumptions increase.

V8CM-NFM: We don't have enough data to comment.

Page 54: A Triathlon Event


DB2 SAP V8 Migration Status

• ENFM Catalog conversions < 20 minutes elapsed

• SAP migration is directly to NFM

• SAP application is certified for DB2 V8 NFM

• DB2 SAP Migrations

– March 2005 – December 2005

– 70 Systems (Data Sharing, too)

– Performance observations

The DB2 V8 Migration path for SAP systems is through CM, ENFM, and directly to New Function Mode. The SAP application is certified with DB2 V8 NFM. The SAP application is much more "vanilla" than the code we see in our "Classic" systems. They have not experienced most of the problems we've seen in "Classic".

The DB2 SAP Migrations at John Deere started in March of 2005 and completed in December 2005.

The "core" R3 systems were among the last to migrate. We have found the CPU performance to be within the range of V7. Any CPU difference from V8 is lost in the variation of the normal work.

For PCC, there is less than 1% difference in CPU per first-shift dialog step. (A dialog step can be thought of as an online transaction, like an IMS transaction, usually of some reasonable size.) This was repeatable, with limited variation in size and good volume. We focused measurements on first shift because volumes are a lot lower off shift, and we did not want outliers to skew the numbers. The alternative to measuring SAP dialog steps would be SAP Batch work processes, which can vary a lot more in resource consumption.
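The kind of per-dialog-step comparison described above can be sketched as follows. This is a hypothetical illustration only: the sample data, the 06:00-13:59 first-shift window, and the function and field names are assumptions, not SAP or DB2 interfaces.

```python
# Compare mean CPU per dialog step between V7 and V8, restricted to
# first shift so that low-volume off-shift outliers don't skew the result.
# All sample data below is hypothetical.

FIRST_SHIFT = range(6, 14)  # assume first shift covers 06:00-13:59

def mean_cpu_per_step(samples):
    """samples: list of (hour_of_day, cpu_millisecs), one tuple per dialog step."""
    first_shift = [cpu for hour, cpu in samples if hour in FIRST_SHIFT]
    return sum(first_shift) / len(first_shift)

v7_samples = [(7, 100), (9, 102), (12, 98), (22, 400)]   # 22:00 outlier excluded
v8_samples = [(8, 101), (10, 100), (13, 100), (2, 350)]  # 02:00 outlier excluded

v7 = mean_cpu_per_step(v7_samples)
v8 = mean_cpu_per_step(v8_samples)
pct_diff = 100.0 * (v8 - v7) / v7
print(f"V7 {v7:.1f} ms, V8 {v8:.1f} ms, difference {pct_diff:+.2f}%")
```

Restricting to a stable, high-volume window is the design choice here: with good volume and limited variation per step, a sub-1% difference is meaningful rather than noise.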

Page 55: A Triathlon Event


Summary

• Preparation

– Current Maintenance

– Check II13695 V7-V8 Fallback Info APAR

– Good test plan!

– Review the latest online publications

• Communication

– Infrastructure, Business partners, Team

• Migration

– Always have a “Like” Environment

• Dive in!

The focus of every great race is up front, in preparation and training. Make sure you are current on maintenance for V8 before you start your migration. At time of publication, we are stable on PUT 0512 plus some PTFs, HIPERs, and a few APARs.

Check the II13695 V7-V8 Fallback Info APAR each time you package maintenance.

Develop a good test plan! The IVP covers a lot, but there may be cool and unusual things done in your shop that aren't covered in the IVP. Every environment is unique.

Many publications have been updated since the initial release. Review the latest information in the online publications.

Have a good communication plan among all parties. No one likes surprises!

Always have a “like” environment, for each version / mode you could end up with during the migration process. You may need to test or apply maintenance unique to that environment.

What are you waiting for? Dive in!

Page 56: A Triathlon Event


References

• Check the Web Publications - updates are frequent!

• DB2 UDB for z/OS V8 Release Planning Guide

• DB2 UDB for z/OS V8 Installation Guide

• Redbook: DB2 UDB for z/OS Version 8: Everything You Ever Wanted to Know, and More

• Redbook: DB2 UDB for z/OS V8: Technical Preview

• Program Directory for IBM DB2 Management Clients Package

• Redbook: DB2 for z/OS and WebSphere: The Perfect Couple

• QMF Compatibility

• ISPF Browse Support for Unicode

• DB2 V8 Information Roadmap

DB2 UDB for z/OS V8 Release Planning Guide

http://www.ibm.com/software/data/db2/zos/v8books.html

DB2 UDB for z/OS V8 Installation Guide

http://www.ibm.com/software/data/db2/zos/v8books.html

Redbook: DB2 UDB for z/OS Version 8: Everything You Ever Wanted to Know, and More
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/sg246079.html

Redbook: DB2 UDB for z/OS Version 8 Technical Preview
http://publib-b.boulder.ibm.com/Redbooks.nsf/9445fa5b416f6e32852569ae006bb65f/fd295ccb05a9a8f485256bff00112194?OpenDocument&Highlight=0,sg24-6871

Program Directory for IBM DB2 Management Clients Package, GI10-8567

IBM DB2 V8 Redbooks and Redpapers
http://publib-b.boulder.ibm.com/redbooks.nsf/portals/Data

IBM DB2 V8 Books: http://www.ibm.com/software/data/db2/zos/v8books.html

Redbook: DB2 for z/OS and WebSphere: The Perfect Couple

http://www.redbooks.ibm.com/abstracts/sg246319.html?Open

QMF Compatibility: http://www.ibm.com/support/docview.wss?rs=89&context=SS9UMF&dc=DB520&q1=db2+qmf+v8&q2=z%2fos&uid=swg21201944&loc=en_US&cs=utf-8&lang=en

QMF Support: http://www.ibm.com/software/data/qmf/support.html

ISPF Browse Support for Unicode:

V1R7.0 ISPF User's Guide Volume II, http://www.ibm.com/servers/eserver/zseries/zos/bkserv/r7pdf/ispf.html

DB2 V8 Information Roadmap: http://www.ibm.com/software/data/db2/zos/roadmap.html

Page 57: A Triathlon Event


Joan Keemle

John Deere

[email protected]

DB2 UDB for z/OS V8 Migration: A Triathlon Event