

Orchestrate 7.5 Release Notes

Generating Performance Data 2
  Specifying Performance Data 2
  The Performance Data XML File 3
  Performance Data Records 4
    When Records are Output 4
    Orchestrate-Defined Phases 4
    User-Defined Sub-Phases 5
    Internal Processing Messages 6
    Record Format 6
    Phase Timing 7
  Sample Log Output 7
Performing Conditional Lookups with the transform Operator 9
  Lookup Tables 10
  Lookup Table Functions 10
  Accessing the Value of a Field in the Lookup Schema 11
  Identifying Lookup Tables by Name 11
Specifying Collation Strength 12
New Options for the import Operator 12
funnel Operator Record Processing 13
New Function for the transform Operator 13
Using the import Operator null_field Property 14
  Specifying the Number of Records for Import 14
New -statistics Option for db2load 14
Supported Operating Systems and Applications 15


Generating Performance Data

You can direct Orchestrate to generate data on the performance of your jobs and to print that data to an XML file so that you can analyze or visualize it. You can use this information to gain a better understanding of component workload.

Performance data is generated for all the operators in your job flow, including composite operators and operators inserted by Orchestrate. A performance data record is output at the start and end of the phases and sub-phases of operator execution. Orchestrate also generates performance data for player timing, memory usage, and buffer performance.

Phases are set by Orchestrate. They are initializeFromArgs_(), describeOperator(), doInitialProcessing(), runLocally(), doFinalProcessing(), and other Orchestrate functions listed in Orchestrate-Defined Phases on page 4.

Sub-phases are set up by Orchestrate users using the Orchestrate C++ API. The API functions are described in User-Defined Sub-Phases on page 5.

Because generating performance data increases execution time, do not generate it when job performance is critical.

Specifying Performance Data

There are two ways to output performance data:

• Specify an output directory with the -pdd osh option. The syntax is:

  osh -pdd directory

  For example:

  osh -pdd /home/usr/job_performance

• Set the APT_PERFORMANCE_DATA environment variable. You can set it to a directory path, or set it without a value to indicate that the data should be placed in the current working directory.

The command-line setting takes precedence over the environment variable.

The output file is named:

  performance.$JOBID

where JOBID is the number expected by the Job Monitor, either the process identifier or the job identifier of osh.

The directory you specify for performance data should be on a device separate from any I/O devices the job uses, so that writing the data does not cause bottlenecks during job execution. For optimal efficiency, use a completely separate local disk for performance output.


osh command-line examples:

• osh -f foo.osh -pdd /tmp >& output.out &
  [2] 12345
  > ls /tmp/performance.12345

• osh -f foo.osh -jobid "myID" -pdd /tmp >& output.out
  > ls /tmp/performance.myID

• setenv APT_PERFORMANCE_DATA
  osh -f foo.osh >& output.out &
  [3] 12346
  > ls ./performance.12346
  /home/example/performance.12346

• setenv APT_PERFORMANCE_DATA /tmp
  osh -f foo.osh >& output.out &
  [4] 12347
  > ls /tmp/performance.12347
  /tmp/performance.12347

• osh -f foo.osh -pdd /home/jobs/performance_output >& output.out &
  [5] 12348
  > ls /home/jobs/performance_output/performance.12348

The Performance Data XML File

Here is the format of the output XML file. The performance data records are included toward the end of the file. They are described in the next section, Performance Data Records on page 4.

<?xml version="1.0" encoding="ISO-8859-1" ?>
<performance_output version="1.0" date="20031101 11:31:15"
    framework_revision="7.5" job_ident="myJob">
  <layout delimiter=",">
    <field name="TIME"/>
    <field name="PARTITION_NUMBER"/>
    <field name="PROCESS_NUMBER"/>
    <field name="OPERATOR_NUMBER"/>
    <field name="IDENT"/>
    <field name="JOBMON_IDENT"/>
    <field name="PHASE"/>
    <field name="SUBPHASE"/>
    <field name="ELAPSED_TIME"/>
    <field name="CPU_TIME"/>
    <field name="SYSTEM_TIME"/>
    <field name="HEAP"/>
    <field name="RECORDS"/>
    <field name="STATUS"/>
  </layout>
  <run_data>
  <![CDATA[
  # Performance data records are here.
  ]]>
  </run_data>
</performance_output>


Performance Data Records

When Records are Output

A performance data record is output at the start and end of each phase and sub-phase as follows:

• The beginning phase of operator execution [START]

– The beginning of a sub-phase of operator execution [START]

• A Job Monitor interval; for example, when a time-based or size-based trigger occurs [In_Progress]

• The end of a phase of operator execution when a status is returned [APT_StatusOk, APT_StatusFailed]

– The end of a sub-phase of operator execution

Orchestrate-Defined Phases

Performance records are output for the following operator and combined-operator phases set by Orchestrate. For composite operators, all phases occur in terms of the operators that make up the composite.

Operator Phases

• initializeFromArgs_()

• describeOperator()

• doInitialProcessing()

• runLocally()

• doFinalProcessing()

• postFinalRunLocally()

Combined Operator Phases

• initializeFromArgs_()

• describeOperator()

• doInitialProcessing()

• waitingForInput()

This phase is specific to combined-operator functionality; it indicates that no records have arrived or are ready for output.

• processInputRecord()

Starts at the first ready input record and ends once the end of the data is received for all inputs.

4 Orchestrate 7.5 Release Notes

Page 5: Orchestrate 7.5 Release Notes - pravinshetty.com

Generating Performance Data

• writeOutputRecord()

Output records are also written while input records are being processed, but this phase signifies that no more input records will arrive on any input, or that output is occurring without the need for an input record, and that records are being written to the output datasets.

• doFinalProcessing()

• postFinalRunLocally()

User-Defined Sub-Phases

You set up sub-phases using the public member functions of APT_Operator:

• void APT_Operator::markPerformanceSubPhase(APT_UString subPhase)

Use this function to generate a start sub-phase performance record at the time of its call.

• void APT_Operator::endPerformanceSubPhase()

Use this function to generate an end sub-phase performance record at the time of its call. The record includes the APT_Status value in the status field.

• bool APT_Operator::inSubPhase()

Use this function to determine whether the operator is currently in a sub-phase.

When your job is not set up for performance data generation, the phase-change calls are nothing more than a check to see whether performance data generation is specified. When performance data is generated, the phase information is stored and a Start status performance data message is issued.

The following is an example of this functionality for an operator that buffers records and then outputs data:

markPerformanceSubPhase("Loading Records");
while (getRecord())
{
    doSomethingWithRecord();
}
endPerformanceSubPhase();
...
markPerformanceSubPhase("Outputting Results");
while (...)
{
    putRecord();
}
endPerformanceSubPhase();

Note: An operator can be in only one phase at a time, and in only one sub-phase of a given phase at a time. Always end the current sub-phase before marking the beginning of a new one; however, if you mark a new sub-phase without ending the previous one, Orchestrate automatically marks the ending.

Internal Processing Messages

These messages occur only during Job Monitor intervals and signify that execution was between phases when the event occurred. This event is rare, but can occur with combined operators, since the operator can be in a state where, although it is running with active datasets, it has yet to receive or output any records. This state may also occur for any operator that has completed its runLocally segment and is waiting for other operators to finish.

Record Format

Performance data records are delimited by commas and have this format:

TIME PART# PROC# OP# ID JID PHASE SPHASE ELPST CPUT SYST HEAP RECS STATUS

where:

• TIME is a timestamp of when the record was generated. This field is applicable to all messages.

• PART# is the partition from which the message was issued. This is applicable to all messages.

• PROC# is the process number. It is derived from the order in which processes are created for a partition. The section leader is the 0th process, and players are numbered from 1 through n.

• OP# is the operator number in the step that produced the data. The operator number differs from the process number because, with combined operators, there can be multiple operators in a process.

• ID is the operator internal identifier.

• JID is the Job Monitor ident for the operator.

• PHASE is the phase of execution the operator is in. This may be initializeFromArgs, describeOperator, doInitialProcessing, runLocally, doFinalProcessing, and other functions.

• SUBPHASE is a special field that describes a logical subsection in the code. Orchestrate users set up sub-phases using the Orchestrate C++ API which is described in User-Defined Sub-Phases on page 5.

• ELPST, CPUT, and SYST capture timing information in seconds for the component. This information is only applicable to completion and progress messages; see STATUS below. Note that CPUT is time in CPU seconds, not the percentage of CPU time.

6 Orchestrate 7.5 Release Notes

Page 7: Orchestrate 7.5 Release Notes - pravinshetty.com

Generating Performance Data

• HEAP is the heap size growth. This is only applicable to completion and progress messages. See STATUS below.

• RECS is the number of records processed to date. This field is a subrec of two integer vectors, one for input counts and one for output counts. Each record has this format:

[n]inrecs(0)|...|inrecs(n)|[m]outrecs(0)|...|outrecs(m)|

• STATUS is the entry status for the current phase of this operator. Status may have any of the following values: Start, End (APT_StatusOk), End (APT_StatusFailed), or In_Progress.

Phase Timing

You can compute execution timing in two ways:

• Computing literal job times by subtracting timestamps after job execution

• Summing the times returned in seconds upon the completion of each phase of execution. This is more precise than computing literal job times.

The times in the fields ELPST, CPUT, and SYST described in “Record Format” above represent the elapsed real time for the current phase, the time in terms of CPU cycles used, and the time in terms of system usage, respectively. Each subphase and Job Monitor timing event gives the time elapsed since the start of the current phase of the job.

To exclude the overhead of performance data collection from the measurements as much as possible, each event follows this flow of execution:

APT_PerformanceInfo::myEvent(...)
{
    stop();               // Stop timing.
    computeStats();       // Figure out the performance results.
    setupGenericRecord(); // Fill in the necessary fields for output.
    ...                   // Event-specific changes
    logRecord();
    resume();             // Resume timing.
}

The pseudo code above shows how the timing in clock ticks is halted during the computing and logging of performance data. Even though jobs run slower when performance data is generated, the data is close to what performance would be like for a typical job run, provided there are no additional resource contentions.

Sample Log Output

There are four nodes. The APT_PERFORMANCE_DATA environment variable is set. The osh command is:

osh -f OshScript.osh -jobid foo


The osh script is:

"generator -schema record(a:int32) -records 1000000 [jobmon_ident(gen)] |
 tsort -key a -desc [jobmon_ident(sort)] |
 peek [jobmon_ident(peek)]"

Abbreviated output:

<?xml version="1.0" encoding="ISO-8859-1" ?>
<performance_output version="1.0" date="20040202 14:46:24"
    framework_revision="7.0.0" job_ident="foo">
  <layout delimiter=",">
    <field name="TIME"/>
    <field name="PARTITION_NUMBER"/>
    <field name="PROCESS_NUMBER"/>
    <field name="OPERATOR_NUMBER"/>
    <field name="IDENT"/>
    <field name="JOBMON_IDENT"/>
    <field name="PHASE"/>
    <field name="SUBPHASE"/>
    <field name="ELAPSED_TIME"/>
    <field name="CPU_TIME"/>
    <field name="SYSTEM_TIME"/>
    <field name="HEAP"/>
    <field name="RECORDS"/>
    <field name="STATUS"/>
  </layout>
  <run_data>
  <![CDATA[
20040202 14:46:24,0,0,0,APT_LicenseOperator,,describeOperator,,0,0,0,0,[0][0],Start
20040202 14:46:24,0,0,0,APT_LicenseOperator,,describeOperator,,0,0,0,0,[0][0],End (APT_StatusOk)
20040202 14:46:25,0,0,0,generator,gen,describeOperator,,0,0,0,4294959104,[0][0],Start
20040202 14:46:25,0,0,0,generator,gen,describeOperator,,0,0,0,4294959104,[0][0],End (APT_StatusOk)
20040202 14:46:25,0,0,1,tsort,sort,describeOperator,,0,0,0,0,[0][0],Start
20040202 14:46:25,0,0,1,tsort,sort,describeOperator,,0,0,0,0,[0][0],End (APT_StatusOk)
20040202 14:46:25,0,0,2,peek,peek,describeOperator,,0,0,0,0,[0][0],Start
20040202 14:46:25,0,0,2,peek,peek,describeOperator,,0,0,0,0,[0][0],End (APT_StatusOk)
20040202 14:46:25,0,0,3,APT_CombinedOperatorController,,describeOperator,,0,0,0,0,[0][0],Start
. . .
  ]]>
  </run_data>
</performance_output>


Performing Conditional Lookups with the transform Operator

To support conditional lookups, the transform operator now accepts multiple input lookup tables in addition to the source input dataset.

Input 0 is the input dataset, and any additional datasets must match a -table option or a -fileset option.

The -table option has this syntax:

-table -key field [-ci | -cs] [-key field [-ci | -cs] ...]
       [-allow_dups] [-save fileset_descriptor] [-diskpool pool]
       [-tableschema schema | -tableschemafile schema_file]

The -fileset option specifies a fileset with a lookup table. It has this syntax:

-fileset fileset_description

The -key, -ci, -allow_dups, -save, and -diskpool options function just as they do in the lookup operator:

• The -key option specifies the name of a lookup key field. This option must be repeated if there are multiple key fields. You must specify at least one key for each table. You cannot use a vector, subrecord, or tagged aggregate field as a lookup key.

• The -ci suboption specifies that the string comparison of lookup key values is to be case insensitive; the -cs option specifies case-sensitive comparison, which is the default.

• The -allow_dups option causes the operator to save multiple copies of duplicate records in the lookup table without issuing a warning. Two lookup records are duplicates when all lookup key fields have the same value in the two records. If you do not specify this option, Orchestrate issues a warning message when it encounters duplicate records and discards all but the first of the matching records.

  In normal lookup mode, only one lookup table (specified by either -table or -fileset) can have been created with -allow_dups set.

[Figure: data flow for the transform operator. The inputs are input.ds plus lookup filesets (fileset1 ... filesetn) and lookup tables (table0.ds ... tablen.ds); the outputs are output datasets, reject datasets, and output filesets when the -save suboption is used.]

• The -save option lets you specify the name of a fileset to write this lookup table to; if -save is omitted, tables are written as scratch files and deleted at the end of the lookup. In create-only mode, -save is, of course, required.

• The -diskpool option lets you specify a disk pool in which to create lookup tables. By default, the operator looks first for a "lookup" disk pool, then uses the default pool (""). Use this option to specify a different disk pool to use.

In addition, the -tableschema and -tableschemafile suboptions have been added. They are used to specify the schema of the dataset. They are mutually exclusive, and one of them is required if the -compile option is set, but neither is required for -compileAndRun or -run.

Here is an example osh command line:

osh -transform -expressionfile trx1 -table -key a -ci -fileset sKeyTable.fs < dataset.ds < table.ds > target.ds

Lookup Tables

A new lookup table object is available from the transform code. Each lookup table is named using the tablename keyword. This name corresponds to a lookup table object of the same name, which is used in the transform code.

Lookup Table Functions

The lookup table object is the parameter in the following methods:

• lookup(lookup_table) Performs a lookup on the table using the current input record. It fills the current record of the lookup table with the first record found. If a match is not found, the current record is empty. If lookup() is called multiple times on the same record, the record object will be filled with the current match if there is one. A new lookup will not be done. There is no return value.

• next_match(lookup_table) Gets the next record matched in the lookup and puts it into the current record of the table. There is no return value.

• clear_lookup(lookup_table) This method fills the lookup table record with an empty record. The current match can be restored by calling lookup() again. It has no return value.

10 Orchestrate 7.5 Release Notes

Page 11: Orchestrate 7.5 Release Notes - pravinshetty.com

Performing Conditional Lookups with the transform

• is_match(lookup_table) Checks to see if the current lookup record has a match. If this method returns false directly after the lookup() call, no matches were found in the table. Returns a boolean value specifying whether the record is empty or not.

Accessing the Value of a Field in the Lookup Schema

The name of any field in the lookup schema, other than key fields, can be used to access the field value, such as table1.field1. If a field is accessed when is_match() returns false, the value of the field is null if the field is nullable; otherwise it has its default value.

Identifying Lookup Tables by Name

Use the new tablename keyword to identify lookup tables by name in the transform code, in the same manner that inputname and outputname identify input and output datasets. Because a lookup table can come either from an input to the operator or from a fileset, the order of parameters on the command line is used to determine the number associated with the table.

Here is an example of lookup table usage:

transform -expressionfile trx1 -table -key a -fileset sKeyTable.fs < dataset.v < table.v > target.v

trx1:

inputname 0 in1;
outputname 0 out0;
tablename 0 tbl1;
tablename 1 sKeyTable;

mainloop
{
    // This code demonstrates the interface without doing anything
    // really useful.
    int nullCount;
    nullCount = 0;

    lookup(sKeyTable);
    if (!is_match(sKeyTable))   // if there's no match
    {
        lookup(tbl1);
        if (!is_match(tbl1))
        {
            out0.field2 = "missing";
        }
    }
    else
    {
        // Loop through the results.
        while (is_match(sKeyTable))
        {
            if (is_null(sKeyTable.field1))
            {
                nullCount++;
            }
            next_match(sKeyTable);
        }
    }
    writerecord 0;
}

Specifying Collation Strength

The collation strength of strings determines the collation sequence Orchestrate uses for sorting and searching. You specify collation strength using the -collation_strength top-level osh option.

Its syntax is:

-collation_strength identical | tertiary | secondary | primary

Orchestrate uses International Components for Unicode (ICU) libraries to support collation functionality. Information on the effect of modifying collation strengths can be found in the ICU documentation available at this site:

http://oss.software.ibm.com/icu/userguide/Collate_Intro.html

In brief:

• identical indicates there is no strength difference between two characters.

• tertiary identifies a difference between an upper and a lowercase character; for example, a and A.

• secondary indicates a difference between an accented and an unaccented character; for example a and ä.

• primary identifies a base-letter difference between two characters; for example, a and b.

New Options for the import Operator

The following options have been added to the import operator:

• -multinode [ -multinode yes | no ]

A yes value specifies the input file is to be read in sections from multiple nodes; a no value, the default, specifies the entire file is to be read by a single node.

• -first [ -first n ]


This option imports the first n records of a file. It does not apply to multiple nodes, filesets, or file patterns. It does apply to multiple files and to the -source and -sourcefile options when file patterns are specified.

Here are some osh command examples:

osh "import -file file1 -file file2 -first 10 >| outputfile"

osh "import -source 'cat file1' -first 10 >| outputfile"

osh "import -sourcelist sourcefile -first 10 >| outputfile"

osh "import -file file1 -first 5 >| outputfile"

• -firstLineColumnNames [ -firstLineColumnNames ]

Specifies that the first line of a source file contains column names and should not be imported.

• -sourceNameField [ -sourceNameField sourceStringFieldName ]

Adds a field named sourceStringFieldName with the import source string as its value.

• -recordNumberField [ -recordNumberField recordNumberFieldName ]

Adds a field named recordNumberFieldName with the record number as its value.

funnel Operator Record Processing

The funnel operator processes its inputs using non-deterministic selection based on record availability; this is now the default behavior. The operator examines its input data sets in round-robin order. If the current record in a data set is ready for processing, the operator processes it. If the current record in a data set is not ready for processing, the operator does not halt execution; instead, it moves on to the next data set and examines its current record for availability. This process continues until all the records have been transferred to output.

New Function for the transform Operator

The soundex function returns a string that represents the phonetic code for an input word. Input words that produce the same code are considered phonetically equivalent. Its syntax is:

APT_String soundex(const APT_String& inputWord, APT_Int8 lengthOption = 4, const APT_Int8& censusOption = 0);


Using the import Operator null_field Property

When you are using the null_field property at the record level, it is important to also specify the field-level null_field property for any nullable fields not covered by the record-level property.

For example:

record { null_field = 'aaaa' }
(
  field1:nullable int8 { null_field = '-127' };
  field2:nullable string[4];
  field3:nullable string;
  field4:nullable ustring[8] { null_field = 'ââââââââ' };
)

The record-level property above applies only to variable-length strings and fixed-length strings of four characters, field2 and field3; field1 and field4 are given field-level null_field properties because they are not covered by the record property.

Specifying the Number of Records for Import

You can use the -first option to import the first n records of a file. Its syntax is:

-first n

This option does not work with multiple nodes, filesets, or file patterns. It does work with multiple files and with the -source and -sourcefile options when file patterns are specified.

Here are some osh command examples:

osh "import -file file1 -file file2 -first 10 >| outputfile"

osh "import -source 'cat file1' -first 10 >| outputfile"

osh "import -sourcelist sourcefile -first 10 >| outputfile"

osh "import -file file1 -first 5 >| outputfile"

New -statistics Option for db2load

The -statistics option allows you to specify which statistics are generated upon load completion. Its syntax is:

-statistics stats_none | stats_exttable_only | stats_extindex_only | stats_index | stats_table | stats_extindex_table | stats_all | stats_both

The default value is stats_none.


Supported Operating Systems and Applications

AIX 4.3.3, 5.1, and 5.2
  C++ Compiler: VisualAge C++ 5.0.2.0 and 6.0
  DBMS: IBM DB2 UDB 7.2 EEE and DB2 UDB ESE v8.1 with DPF;
    INFORMIX XPS 8.3 and 8.4;
    Oracle 8i EE R3 (8.1.7) for AIX 4.3.3 and 5.1 only; Oracle 9i (9.2) for AIX 4.3.3, 5.1, and 5.2;
    Teradata v2r4.1 and v2r5 with Teradata Utilities Foundation (TUF) 6.1.0 and TTU 7.0
  SAS: SAS 6.12 and 8.2

Compaq Tru64 5.1
  C++ Compiler: Compaq C++ 6.2, 6.3, and 6.5
  DBMS: Oracle EE R3 (8.1.7) and Oracle 9i (9.2)
  SAS: SAS 6.12 and 8.2

Red Hat Enterprise Linux AS 2.1
  C++ Compiler: gcc/g++ 2.96
  DBMS: Oracle 8i EE R3 (8.1.7) and Oracle 9i (9.2);
    DB2 UDB ESE v8.1 with DPF
  SAS: SAS 8.2

HP-UX 11 and 11.11
  C++ Compiler: HP ANSI C++ A3.50. Note: this compiler requires patch PHSS_29483 from HP, available at http://itrc.hp.com; it changes the compiler version from 3.50 to 3.52.
  DBMS: IBM DB2 UDB 7.2 EEE and DB2 UDB ESE v8.1 with DPF;
    INFORMIX XPS 8.3 and 8.4;
    Oracle 8i EE R3 (8.1.7) for HP-UX 11.11 only; Oracle 9i (9.2) for 11 and 11.11;
    Teradata v2r4.1 and v2r5 with Teradata Utilities Foundation (TUF) 6.1.1 and TTU 7.0
  SAS: SAS 6.12 and 8.2

Sun Solaris for Sparc 2.7, 2.8, and 2.9
  C++ Compiler: Sun Pro C++ 6.0/Forte; Sun ONE Studio 7
  DBMS: IBM DB2 UDB 7.2 EEE and DB2 UDB ESE v8.1 with DPF;
    INFORMIX XPS 8.3 and 8.4;
    Oracle EE R3 (8.1.7) for Solaris 2.9 only; Oracle 9i (9.2) for Solaris 2.7, 2.8, and 2.9;
    Teradata v2r4.1 and v2r5 with Teradata Utilities Foundation (TUF) 6.1.1 and TTU 7.0
  SAS: SAS 6.12 and 8.2


© 2004, 1995-2004 Ascential Software Corporation. All rights reserved. Orchestrate, Ascential, Ascential Software, DataStage, MetaStage, MetaBroker, and Axielle are trademarks of Ascential Software Corporation or its affiliates and may be registered in the United States or other jurisdictions. Adobe Acrobat is a trademark of Adobe Systems, Inc. HP and Tru64 is either a registered trademark or trademark of Hewlett-Packard Company. AIX, DB2, DB2 Universal Database, IBM, Informix, MQSeries, Red Brick, UniData, UniVerse, and WebSphere are either registered trademarks or trademarks of IBM Corporation. Microsoft, Windows, Windows NT, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Teradata is a registered trademark of NCR International, Inc. Oracle, Oracle8i, and Oracle 9i are either registered trademarks or trademarks of Oracle Corporation. Solaris, Sun, and Sun Microsystems are either trademarks or service marks of Sun Microsystems, Inc. Adaptive Server, Open Client, and Sybase are either registered trademarks or trademarks of Sybase, Inc. Linux is a trademark of Linus Torvalds. WinZip is a registered trademark of WinZip Company, Inc. UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company, Ltd. Other marks mentioned are the property of the owners of those marks.

Ascential Software Corporation. 50 Washington Street Westboro, MA 01581-1021 508 366-3888 508 389-8955 FAX

For technical support, send e-mail to:

[email protected].
