Pervasive Integration Platform Fundamental End User Training

Pervasive Data Integrator


© 2006 Pervasive Software Inc. All rights reserved. Design by Pervasive.

Pervasive is a registered trademark, and "Integrating the Interconnected World" is a trademark of Pervasive Software Inc. Cosmos, Integration Architect, Process Designer, Map Designer, Structured Schema Designer, Extract Schema Designer, Document Schema Designer, Content Extractor, CXL, Integration Engine, DJIS, Data Junction Integration Suite, Data Junction Integration Engine, XML Junction, HIPAA Junction, and Integration Engineering are trademarks of Pervasive Software Inc.

All names of databases, formats and corporations are trademarks or registered trademarks of their respective companies.

This exercise scenario workbook was written for Pervasive’s Integration Platform software, version 8.14.


Table of Contents:

Foreword
The Pervasive Integration Platform
Architectural Overview of the Integration Platform
Design Tools
MetaData Tools
Production Tools
Installation Folders
Repository Explorer
Define a Workspace and Repository
Configuring Database Connectivity (ODBC Drivers)
Splash Screen – Licensing and Version info

Map Designer – Fundamentals of Transformation
Map Designer – The Foundation
Interface Familiarization
Default Map
Connectors and Connections – Accessing Data
Factory Connections
User Defined Connections
Macro Definitions
Automatic Transformation Features
Source Data Features – Sort
Source Data Features – Filter
Target Output Modes - Replace, Append, Clear and Append
Target Output Modes – Delete
Target Output Modes – Update
The RIFL Script Editor
RIFL Script - Functions
RIFL Script – Flow Control
Transformation Map Properties
Reject Connection Info
Event Handlers & Actions
Understanding Event Handlers
Event Sequence Issues
Using Action Parameters – Conditional Put
Using "OnDataChange" Events
Trapping Processing Errors With Events
Comprehensive Review

Metadata – Using the Schema Designers
Structured Schema Designer
No Metadata Available (ASCII Fixed)
External Metadata (Cobol Copybook)
Binary Data and Code Pages
Reuse Metadata (Reusing a Structured Schema)
Multiple Record Type Support in Structured Schema Designer
Conflict Resolution
Extract Schema Designer
Interface Fundamentals & CXL
Data Collection/Output Options
Extracting Fixed Field Definitions
Extracting Variable Fixed Field Definitions

EasyLoader
Overview & Introductory Presentation
Using EasyLoader to create/run maps – using wizard
Using EasyLoader to create/run maps – without wizard
Creating Targets for use with Easy Loader

Process Designer for Data Integrator
Process Designer Fundamentals
Creating a Process
Conditional Branching – The Step Result Wizard
Parallel vs. Sequential Processing
FileList - Batch Processing Multiple Files
Integration Engine
Syntax: Version Information
Options and Switches
Execute A Transformation
Using a "-Macro_File" Option
Command Line Overrides – Source Connection
Ease of Use: Options File
Executing a Process
Using the "-Set" Variable Option
Scheduling Executions

Mapping Techniques
Multiple Record Type Structures
Multiple Record Type – 1 One-to-Many
Multiple Record Type – 2 Many-to-One
User Defined Functions
Code Reuse – Save/Open a RIFL script
Code Reuse - Code Modules
Lookup Wizards
Flat File Lookup
Dynamic SQL Lookup
Incore Table Lookup
RDBMS Mapping
Select Statements – SQL Passthrough
Integration Querybuilder
DJX in Select Statements – Dynamic Row sets
Multimode Introduction
Multimode – Data Normalization
Multimode Implementation with Upsert Action

Management Tools
Upgrade Utility
Upgrading Maps from Prior Versions
Engine Profiler
Data Profiler


Foreword

This course is designed to be presented in a classroom environment in which each student has access to a computer with the Pervasive Integration products and this Fundamentals courseware installed. It can also be used as a stand-alone tutorial course if the student is already familiar with the interface of the Pervasive tools.

The Fundamentals course is not meant to be a comprehensive tutorial covering all of our products. By the end of this course, the student should have a basic understanding of Map Designer, Structured Schema Designer, Extract Schema Designer, Process Designer, and the Integration Engine, and should know how to use these tools and how to expand their own knowledge of them.

Further training can be obtained from Pervasive Training Services.

Any path mentioned in this document assumes a default installation of the Pervasive software and the Fundamentals courseware. If the student installs to a different location, that will have to be taken into account when doing exercises or following links.

We hope that the student enjoys this class and takes away everything needed. We welcome any feedback.


The Pervasive Integration Platform

This section describes the integration stack from the user’s perspective.


Architectural Overview of the Integration Platform

This presentation depicts the architecture of the Integration Platform from the end-user’s perspective. It briefly discusses all of the Integration tools and how they work together.

Integration_General Overview.ppt


Design Tools

Here we discuss the tools that we use to create maps (transformations), schemas, profiles and processes.

Data Profiler

Data Profiler is a data quality analysis tool. It analyzes data sets accurately and efficiently, and generates detailed reports of the quality of incoming data.

The user defines metrics to which each record in the input file is then compared. These metrics include compliance testing, conversion testing, and statistics collection. There are predefined metrics to streamline metrics definition.

Data Profiler can generate “clean” data files containing records that fit all of the data analysis criteria and “dirty” data files containing records that fail any of the data analysis criteria.

The reports and the “clean” and “dirty” data files can be used as part of an overall business flow that prepares incoming data for processing.

This document does not have exercises or courseware on Data Profiler, though there is a one-day course available from Pervasive Training Services.

Structured Schema Designer

The Structured Schema Designer provides a visual user interface for defining the structure of data files. The resulting metadata is stored as Structured Schema files with an .ss.xml extension. The .ss.xml files include schema, record recognition rule, and record validation rule information.

In the Structured Schema Designer, you can create or modify schemas that can be accessed in the Map Designer to provide structure for Source or Target files.

You can use the Data Parser to manually parse flat binary, fixed-length ASCII, or record manager files. With the Data Parser you can define the Source record length, define Source field sizes and data types, define Source data properties, assign Source field names, and define schemas with multiple record types.

You can also use the Structured Schema Designer to import schemas from outside sources such as COBOL copybooks, XML DTDs, or Oracle DDL files.

The .ss.xml files that are created by Structured Schema Designer are used as input in Map Designer as part of a source or target connection.

There are courseware and exercises on the Structured Schema Designer in this document.

Extract Schema Designer

The Extract Schema Designer is a parser tool that allows the user to visually select fields and records from text files that are of an irregular format. Some examples are:

• Printouts from programs captured as disk files
• Reports of any size or dimension
• ASCII or any type of EBCDIC text files
• Spooled print files
• Fixed length sequential files
• Complex multi-line files
• Downloaded text files (e.g., news retrieval, financial, real estate...)
• HTML and other structured documents
• Internet text downloads
• E-mail header and body
• On-line textual databases
• CD-ROM textbases
• Files with tagged data fields

Extract Schema Designer creates schemas that are stored as CXL files. These files are then used as input in Map Designer as part of a source connection.

There are courseware and exercises on the Extract Schema Designer in this document.

Document Schema Designer

Document Schema Designer is a Java-based tool that allows you to build templates for E-document files. You can custom-build schema subsets for specific EDI Trading Partner and TranType scenarios. The Document Schema Designer is also very useful to those working with HL7, HIPAA, SAP (IDoc), SWIFT and FIX data files.

You can develop schema files for all e-documents that are compatible with Map Designer. The document schemas serve several useful purposes:

• File Structure
• Metadata Support
• Parsing Capabilities
• Validation Support


In an easy-to-use graphical interface, the user selects desired segments from the "template" document schemas that are generated from the controlling standards documentation. The segments are saved in a schema file that can be edited. The user may also add segments from a "master" segment library, add loops/segments/composites/elements by hand, add discrimination rules for distinguishing loops/segments of the same type at the same level, and use code tables for data validation.

The user can copy, paste and delete any part of the structure, including the segments, elements, composites, loops, and fields (and their subordinate loops/segments/subcomponents).

The Document Schema Designer produces DS.XML document schema files that can be used as input in Map Designer as part of a source or target connection.

These files can also be used in a Process as part of a Validation step.

This document does not have exercises or courseware on Document Schema Designer, though there is a one-day course available from Pervasive Training Services.


Join Designer

Join Designer is an application that allows the user to join two or more single-record type data sources prior to running a Map Designer Transformation on them. These sources do not have to be of the same type. For example, an SQL database table could be joined with a simple ASCII text file.

The user first uses Source View Designer to create Source View Files that hold metadata about the Sources. From these a Join View File is created, which contains the metadata needed by Map Designer to treat the Source files as if they were a single Source. The user then supplies this Join View File to Map Designer using "Join Engine" as the connection type. The original Source files and the Source View Files must still be available in the locations specified in the Join View File.

When a join is saved, a Join View File (.join.xml) is created. This can be supplied to Map Designer as a Source file or used to create further joins.

While a join is limited to two Source files, you can use another join as a Source, thus building up nested joins to any level of complexity.

This document does not have exercises or courseware on Join Designer, though there is an exercise in the Advanced course available from Pervasive Training Services.

Map Designer

Map Designer is the heart of the integration product tool set. It transfers data among a wide variety of data file types. In Map Designer, to transfer data, the user designs and runs what is called a "Transformation" or a "Map" (the two words are synonymous). Each Transformation created contains all the information Map Designer needs to transform data from an existing data file or table to a new Target data file or table, including any modifications that may be necessary.

Map Designer solves complex Transformation problems by allowing the user to:

• transform data between applications
• combine data from external Sources
• change data types
• add, delete, rearrange, split or concatenate fields
• parse and select substrings; pad or truncate data fields
• clean address fields and execute unlimited string and numerical manipulations
• control log errors and events
• define external table lookups

Map Designer creates two files (tf.xml and map.xml) that contain all the information necessary to run a transformation. A transformation can be run from Map Designer, Process Designer or the Integration Engine.

Map Designer is covered extensively in this course and is also explored in the Advanced course and the EDI/HIPAA course.

Easy Loader

Easy Loader is a one-to-one record flat file mapper that creates intermediate data load files. This means that Easy Loader supports single record Source data and creates single record flat data load files. All Target type connectors and schemas are predefined, which makes the user interface easy to learn.


In Easy Loader, all events and actions required at run time are automatically created and hidden from view at design time. This is an advantage when the end user is not proficient with the Map Designer tool. The idea is that the designer will create most of the Map at design time, leaving the simple mapping of the source into the target to the end user. Below is a list of more Easy Loader advantages over Map Designer:

• predefined Targets and Target schemas
• automatic addition of events and actions needed to run
• simplified mapping view
• auto-matching by name (case-insensitive; Map Designer is case sensitive)
• single field mapping wizard launched from the Target fields grid (use the wizard to map all fields, or just one field at a time)
• predefined Target record validation rules
• specific validation error logging for quick data cleansing

Easy Loader requires a pre-defined Structured Schema and it creates a tf.xml and a map.xml file that can be used in the same ways that Transformations built in Map Designer are used.

There are courseware and exercises on the Easy Loader in this document.

Process Designer

Process Designer is a graphical data transformation management tool that can be used to arrange a complete transformation project. Here are some of the Steps that a user can put into a process:

• Map Designer Transformation
• SQL Command
• Decision
• RIFL Scripting
• Command Line Application
• SQL Server DTS Package
• Sub-process
• Validation
• XSLT
• Queue
• Iterator
• Aggregator
• Invoker
• Transformer

Once the user has organized these Steps in the order of execution, the entire workflow sequence can be run as one unit. This workflow is saved as an IP.XML file which can be run from the Process Designer or from Integration Engine.

Process Designer processes can also be packaged using the Repository Manager. This packaging gathers all of the files that are required by the process and puts them into a single DJAR file that can then be run from the Integration Engine.

This courseware covers some basic functionality of the Process Designer. Both the Advanced and the EDI/HIPAA courses go into the more advanced functionality of this tool.


MetaData Tools

With one exception (see the chapter on Extract Schema Designer), all of our design tools create their maps, schemas or processes as XML files. The Metadata tools organize and manipulate those files.

Repository Explorer

The Repository Explorer is the central location from which the user can launch all of the Designers, including the Map Designer, Process Designer, Join Designer, Extract Schema Designer, Structured Schema Designer, Source View Designer and Document Schema Designer.

The User can also open any Repository that has been created, and then open Transformations, Processes or Schema files in that Repository list.

The Repository Explorer can also access the version control functionality of CVS or Microsoft Visual SourceSafe, allowing you to check files in and out of version control using commands in Repository Explorer.

There is courseware about the Repository Explorer in this document.

Repository Manager

Repository Manager is designed to facilitate the tasks of managing large numbers of Pervasive design documents, contained in multiple repositories in multiple workspaces.

Repository Manager provides a single application to directly access any number of Pervasive design documents, view their contents, make simple updates, bundle them into a package, and generate reports.

The features of Repository Manager include:

• Open and work with any number of defined Workspaces.
• Browse the hierarchy of Workspaces, Repositories, Collections, and Documents.
• Search for documents based on text strings, regular expressions, date ranges, Document Types, and document-specific fields.
• Make minor updates to documents.
• Generate an impact analysis of proposed document modifications.
• Import and export Documents and Collections.
• Package Processes and related documents into a single entity (DJAR) that can be more easily managed and transported.
• View and print documents and Reports.

This document does not have exercises or courseware on Repository Manager, though there is an exercise in the Advanced course available from Pervasive Training Services.


Production Tools

These are the tools that allow the user to automate their Transformations and Processes in a production environment.

Integration Engine

Integration Engine is an embedded data Transformation engine used to deploy runtime data replication, migration and Transformation jobs on Windows or Unix-based systems. Because Integration Engine is a pure execution engine with no user interface components, it can perform automatic, runtime data transformations quickly and easily, making it ideal for environments where regular data transformations need to be scheduled and launched. Integration Engine supports the following operating systems:

Windows 2000, Windows XP, Windows Server 2003, HPUX, Sun Solaris, IBM AIX, and Linux.

The Integration Engine has the capability to work with multiple threads if a multi-threaded license is purchased.

There is courseware about the Integration Engine in this document.
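To give a sense of how the engine is invoked, here is a minimal command-line sketch for executing the DefaultMap Transformation built later in this course. The executable name, file paths, and exact switch syntax shown here are assumptions for illustration only; the Integration Engine chapter covers the real options, including "-Macro_File" and "-Set".

djengine "C:\Cosmos_Work\Fundamentals\Development\DefaultMap.tf.xml" -Macro_File "C:\Cosmos_Work\Workspace1\macrodef.xml"

The first argument names the Transformation specification (tf.xml) to execute, and the -Macro_File switch points the engine at a MacroDef file so that macros such as $(funData) resolve correctly on the machine where the job runs.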

Integration Server

Integration Server is an SDK that is installed by default when the integration platform is installed. The core components of the Integration Server SDK are the Engine Controller, Engine Instances (Managed Pool), and the Client API that accesses the Engine Controller through a proxy. Server stability is maintained, scalability is enhanced, and resources are spared through the use of a control-managed pool of EngineExe objects. This allows the Integration Engine to be called as a service.

This document does not have exercises or courseware on the Integration Server, though there is a one-day course available from Pervasive Training Services that covers the Integration Server and the Integration Manager.

Integration Manager

Through a browser-based interface, Integration Manager performs deployment, scheduling, on-going monitoring, and real-time reporting on individual or groups of distributed Integration Engines. Since all management is performed from a single administration point, Integration Manager improves operational efficiency in the management of geographically distributed Integration Engines. With the ability to remotely administer any number of integration points throughout the organization, customers can build out their integration infrastructure as required, using a flexible and scalable architecture designed for easy manageability. In other words, the Integration Manager allows the user to schedule and deploy multiple packages (DJAR) amongst multiple Integration Servers across an enterprise.

This document does not have exercises or courseware on the Integration Manager, though there is a one-day course available from Pervasive Training Services that covers the Integration Server and the Integration Manager.


Installation Folders

The following screenshots indicate the default locations and purposes of each of the installation folders.

The primary installation folder contains all of the EXE, DLL, OCX, and other files that the software needs to run. Its default location is C:\Program Files\Pervasive\Cosmos\Common800.

The application also uses other system-generated folders. These are similar to INI files in other applications and contain application data. The information is stored in XML documents, and the design tools access them for things like user preferences. These files are specific to the username and can be found in the Documents and Settings folder.


The work a user performs is stored in specification files in the Repository (which we will define next). These are also XML files and can be read with any Internet browser, as can the settings files.

One other important folder is created by default when you select a Workspace. It is named “Workspace1” and contains metadata files that are created by certain design tools. For example, “CXL” scripts created by Extract Schema Designer are stored here. “RIFL” scripts, user-defined code modules, and user-defined connections are also saved into this folder by default.

Most importantly, this is the directory for the MacroDef file. We will discuss this file in detail in the Connectors And Connections module.


Repository Explorer

The Repository Explorer is at the heart of the integration product design environment. In this central location, you can launch all of the Designers, including the Map Designer, Process Designer, Join Designer, Extract Schema Designer, Structured Schema Designer, Source View Designer and Document Schema Designer.

You can create multiple Repositories for any given Workspace. This allows you to separate your metadata however you wish. For example, you could have a Repository for a specific project and another Repository for a different project. You may also wish to create Repositories for Development, QA, and Production metadata as you promote your specification files from one to the next.

In the Repository Explorer, you can also access the version control functionality of CVS or Microsoft Visual SourceSafe in order to check your files in and out of repositories using commands in Repository Explorer.

IntegrationArchitect_RepositoryExp.ppt


Define a Workspace and Repository

Keywords: Define the Training Workspace and Repository

Start Repository Explorer

Change the current Workspace Root Directory

Select File>Manage Workspaces (Ctrl+Alt+W). Change the Workspaces Root Directory to the "Cosmos_Work" folder. This will allow you to use a list of Repositories and Macro definitions specific to your current Workspace.

Modify the default Repository in current Workspace

Click on the Repositories button in the bottom right-hand corner of the Workspaces dialog box. When you change the Root Directory, a default Workspace and Repository will be created. We are going to modify the default for use during training. Change the name “XMLDB” to “Fundamentals” and navigate to the folder with the name “C:\Cosmos_Work\Fundamentals” by clicking the Find button.

We will use this Repository to store all of the XML schema and metadata for the training project.


Configuring Database Connectivity (ODBC Drivers)

The exercises in the solutions folder of the Fundamentals training bundle are built using an ODBC connection whenever the source or target is a relational database. Any relational database can be used in the classroom. To set up, the student must establish an ODBC connection called "TrainingDB" to their preferred database. Be aware that the login used for the connection must have sufficient permissions to create and delete tables in the database. This should NOT be a production database.

Both a Pervasive SQL database and an Access database are provided in the training bundle; either can be used. You can download a trial version of the Pervasive SQL Version 9 database at http://www.pervasive.com/downloads/index_data.asp. The Access database requires no additional software. When creating the ODBC connection, configure the ODBC DSN so that its name is "TrainingDB". Point it to the "C:\Cosmos_Work\Fundamentals\Data\PSQLDB" folder to use the PervasiveSQL database, or to the "C:\Cosmos_Work\Fundamentals\Data\TrainingDB.mdb" file if using the Access database.

Once the ODBC connection is established, run the process called "CreateTrainingDB" that is included in the "C:\Cosmos_Work\Fundamentals\Data\Setup_Files\SetupProcess\" folder of the Fundamentals training bundle. This will create the tables and load the data required for the course exercises. If the PervasiveSQL or the Access database is used, this step is not necessary, as those databases come preloaded.

While ODBC allows us to use a more flexible middleware connection to databases, please be aware that you will generally have better performance and more functionality if you use the native client interfaces instead of an ODBC driver.


Splash Screen – Licensing and Version info

Description

• Splash Screen - Shows the Splash Screen for Repository Explorer.
• Credits - Gives a list of credits for third party software components used by the Product.
• Version - Displays the following sections:
   o License Name: Displays the PATH to the Product License file and the License file name.
   o Serial Number: Displays the Product serial number.
   o Version: Displays the Product build version number.
   o Subscription Ends: Displays the date the license file will expire.
   o Users: Displays the number of users licensed for the Product.
   o Single User License For:
      - Name: Name of the person licensed for the Product.
      - Company: Name of the company licensed for the Product.
• Support - Displays the Technical Support address, phone/fax number, and web address.
• Licensing - Displays all of the Connectors, Features and Products that are licensed in the Product.


Map Designer – Fundamentals of Transformation


Map Designer – The Foundation

The Map Designer delivers the ease of an intuitive GUI for visually and directly mapping Source data to Target structures while allowing the user to manipulate the data in virtually limitless ways. The Map Designer tool enables the user to create the specifications for a transformation. A transformation reads one or more source files record by record, applies to each record whatever calculations, filters, checks, etc., are defined and then may write one or more records to one or more target files. The user employs a three-tab, graphical interface to describe the source(s), target(s) and processing logic. Source “connectors” describe the source file(s) and target “connectors” describe the target file(s).

IntegrationArchitect_MapDesigner.ppt


Interface Familiarization

Objectives

The Map Designer icons offer you shortcuts when you are creating, modifying, and viewing maps. Here is information pulled from the Help File about the icons and their descriptions.

Description


Default Map

Objectives

At the end of this lesson you should understand the Source and Target tabs and be able to use the new Simple Map view to create a Transformation.

Keywords: Drag and Drop Mapping

Description

In this exercise we’ll follow the flow chart below and create a simple map.

Exercise

First we need to define our source.

1. Open Map Designer.

2. To the right of the long box next to “Source Connection”, click the down arrow. This will open the “Select Connection” Dialog box.

3. Notice there are three tabs.

The first time you open this it will open on the “Factory Connections” tab, but after the first time it will open on the “Most Recently Used” tab. We will discuss the “User Defined Connection” tab in a future exercise.

4. Choose the “ASCII (Delimited)” connector and click “OK”.


5. To the right of the long box next to “Source File/URI”, click the down arrow. This will open a “Select File” dialog browser. We want to choose Accounts.txt in the C:\Cosmos_Work\Fundamentals\Data folder.

6. In the “ASCII (Delimited) Properties” box on the right side of the Source tab find the “Header” property and set it to “True”. Then click “Apply”.

Any time you make a change in the source or target properties, you will have to click "Apply".

7. Use the toolbar Icon to open the Source Data Browser.

If you see data there, then that confirms that you've connected to your source.

8. Close the browser and click on the “Target Connection”.

Now we’ll be defining our target connection. Note that the Target Connection tab is very similar to the Source Connection tab.

9. In steps two through four above we chose a source connector. Follow those steps again, using “Target” instead of “Source” where appropriate. This time we’ll choose “ASCII (Fixed)”.

10. In the “Target File/URI” drop-down, browse to the C:\Cosmos_Work\Fundamentals\Data folder. Type in “Accounts_Fixed.txt”. Then click “Open”.

We’re now connected to our target.

11. Now Click on the “Map” tab.

12. If you see four quadrants on this page, then you are set to the “Power Map View” and you’ll need to follow the next step. If not, you can skip to step 16.

13. From the “View” menu on the menu bar, choose “Preferences”. Click on the “General” tab. Un-check the option “Always show power map view”.

We will be working in the Power Map view later in the course, but for now, we will use the Simple Map View.


14. Click on the “Simple Map View” icon in the toolbar.

15. You may have to drag the asterisk from the box next to “All Fields” in the source and drop it under the Target field name header.

16. Notice that the target has been filled out with fields identical to the source, and that the “Target Field Expressions” are filled out as well. Validate the Transformation using the check mark icon on the toolbar.

17. You should get a pop up box that says something like “Map1.map.xml is valid”. Click “OK”.

18. Save the Map as “DefaultMap” in the C:\Cosmos_Work\Fundamentals\Development folder. Then click the “Run Map” Icon.

19. Click the “Target Data Browser” and note your results.

The following information is taken from reports generated by Repository Manager for the DefaultMap transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Target (ASCII (Fixed))

location $(funData)AccountsFixed.txt

outputmode Replace


Map Expressions

R1.Account Number Records("R1").Fields("Account Number")

R1.Name Records("R1").Fields("Name")

R1.Company Records("R1").Fields("Company")

R1.Street Records("R1").Fields("Street")

R1.City Records("R1").Fields("City")

R1.State Records("R1").Fields("State")

R1.Zip Records("R1").Fields("Zip")

R1.Email Records("R1").Fields("Email")

R1.Birth Date Records("R1").Fields("Birth Date")

R1.Favorites Records("R1").Fields("Favorites")

R1.Standard Payment Records("R1").Fields("Standard Payment")

R1.Payments Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")
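Each expression above simply copies a source field into the corresponding target field. A target field expression can also transform the value as it is mapped. As a minimal sketch (assuming RIFL provides the usual Trim and UCase string functions, covered in The RIFL Script Editor chapter), the State value could be trimmed and upper-cased with an expression such as:

UCase(Trim(Records("R1").Fields("State")))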


Connectors and Connections – Accessing Data

The connectors are how we read and write data using Map Designer. They are an integral part of the software in that all of the low-level, complex data access programming has been abstracted to a simple form for the user to complete by using drop-down menus and pick lists.


Factory Connections

Objectives:

At the end of this lesson you will be able to find and use the appropriate data access Connector.

Keywords: Connectors List, Connection Menu, and Source Connection tab

Description

Factory Connections contains a list of all of the Connectors available to you in Map Designer. Type the first letter of a Connector name to jump to that Connector in the list (or the first one in the list with that letter). For instance, suppose you want to choose Btrieve v7. Type "B", and BAF will appear. From there, you can scroll down to Btrieve v7 and select it.

The Map Designer Connector Toolbar offers you shortcuts to this dialog. Here are the icons and their descriptions:

New - Allows you to clear the Source tab and define a new source connection.

Open Source Connection – Allows you to open the Select Connection dialog to access the:

o Most Recently Used Tab
o Factory Connections Tab
o User Defined Connections Tab


You can elect to save your Source Connection as an .sc.xml file. The advantage of doing this is that you can reuse the Connection in any subsequent Map design. The .sc.xml file saves the Source Connector and any properties you have set.

Source Connector Properties - opens the Source Properties dialog box. These are the same properties available via the Source Connection tab, and are dependent upon the Connector to which you are connected. This icon will be active only when you are on the Map tab.


User Defined Connections

Objectives

At the end of this lesson you will be able to define and reuse a User Defined Connection.

Keywords: User Defined Connections, Connection Dialogs, and Source Connection tab

Description

These are Connections that you create when you save a Source Connection or a Target Connection. The Connections are saved as either a “.sc.xml” (Source) or “.tc.xml” (Target) file in your Workspace/Connections directory. User Defined Connections are reusable, and you can create as many as you want.

Exercise

1. Reopen the Transformation we built previously named “DefaultMap.map.xml” and view the Source Connection tab.

2. Using the Connector Toolbar to the far right side of the Connection field, click the “Save” button (diskette).

3. Save the source connection as “myAccounts.sc.xml”.

4. Close the current Map and begin a brand new map design.

5. On the Source Connection Toolbar click the Open button (file folder) and the User Defined Connections tab:

6. Select the User-Defined connection you created previously and you are ready to move to the next exercise.


Macro Definitions

Objectives

At the end of this lesson you will be able to define and use Macros in connection strings.

Keywords: Macro Definition, Workspace

Description

We will create a new macro that we can use to represent the Data sub-directory for our Training Repository. This will allow us to port the schema files more readily from one workstation to another or deploy to servers for execution by Integration Engine.

Exercise

From within the Transformation Map Designer:

1. Select the menu item Tools>Define Macros. Notice there is already a macro that is set to the default location of the current Workspace.

2. Click New.

3. Enter a Macro Name value as funData.

4. Click the Macro Value drop-down button and navigate to our workspace and highlight the “C:\Cosmos_Work\Fundamentals\Data” folder.

5. Click OK.

6. Add a trailing backslash “\” to the end of the macro value.

7. Enter a description if you wish and click OK.

8. Now we can use the syntax $(funData) to represent the entire path to the Data folder.

9. Highlight the portion of the connection string you wish to replace.

10. From the menu bar, select Tools>Paste Macro String.

11. Click on the row of the Macro you want to use (e.g., funData). An example of the resulting connection string is shown below.
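For example, if the Source File/URI was originally C:\Cosmos_Work\Fundamentals\Data\Accounts.txt, highlighting the folder portion and pasting the funData macro leaves a connection string like the one below, which is the form you will see in the solution reports later in this courseware:

$(funData)Accounts.txt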


Root Macro

If you will often be selecting files from the same directory or parent directory, you can set the “Root Macro” for automatic substitution.

Highlight the Macro you want to use as the root directory and click the “Set as Root” button.

Also, be sure to set the automatic substitution switch in Map Designer > View > Preferences > Directory Paths:


Automatic Transformation Features

This section describes certain features for manipulating data that are built into the Map Designer.


Source Data Features – Sort

Objectives

At the end of this lesson you should be able to apply a sorting function to your source data.

Keywords: Sorting Source Records, Source Filter Window, Sample Size, and Target Filter Window

Description

We view sorting in our transformations from two angles. First, it is often necessary that the target file be in a certain order. While this doesn’t usually matter in database targets, it can be essential when other file structures are being produced. Secondly, and perhaps more importantly, transformations may be designed much more efficiently if we can rely on the source file being in a certain sequence. Assume that you have a code of some sort in each source record and that you must do a lookup or some complicated processing using that code. If the source file were in code sequence, we could perform this logic only once for each code, and then save and use the results until a new code was encountered.

At the outset, we realize that it is not possible to sort the target file itself in the transformation that creates it. Transformations write target records one at a time, not all at once, and so a direct sort of the target file would have to be an entirely separate transformation.

However, it is quite possible to sort the source file before it is processed, and doing so will achieve either of the requirements for sorting mentioned above. If the source file is already in the sequence needed for the target, then writing the target one record at a time is no problem. Also, we could sort the source file into a sequence that would enable us to minimize processing time.

To sort the source file before processing, we simply use the Source Keys and Sorting dialog. In this dialog we can specify the field(s) by which we want to sort the source file before processing. We can even sort on a constructed or calculated value. We should realize, however, that when we use the Source Data Browser to view the file, we will not see it in its sorted order, since the sorting is performed once the transformation begins and is done dynamically, in memory. The original file is not changed.
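For instance, a constructed key that sorts by State and, within State, by City could be written with a key expression like the sketch below. This assumes RIFL's "&" string concatenation operator; the Build option mentioned in the exercise that follows opens an editor for writing exactly this kind of expression.

Records("R1").Fields("State") & Records("R1").Fields("City")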

Sorting has its own overhead. Extremely large files can take a long time to sort. If this time becomes a factor, then other strategies may need to be employed. But the benefits gained from having the source file in sequence can be even greater. As we will learn in later lessons, sorted source files are a requirement for “on data change” processing, one of the processing strategies that can dramatically reduce the execution time of a transformation.

Exercise

From within the Transformation Map Designer:

1. Connect to the ASCII (Delimited) file Accounts.txt as your source.

2. Set Header to true and apply as we have done previously. (You can do both these steps in one by using the User-Defined connection myAccounts.sc.xml that we created in a previous exercise.)

3. Click the Source Keys and Sorting button in the toolbar.

4. On the Sort Options tab, click in the Key Expression box to see the down arrow. Click on the down arrow. Choose the State field to use as a key. (Note that you could choose Build if you want to build a key using an expression to parse out or concatenate parts of different fields. Note also that the sort will default to Ascending order. If you would prefer to sort in descending order, click on the down arrow and select "Descending" from the dropdown list.)

5. Create a target connection to an ASCII (Delimited) file called AccountsSortedbyState.txt. This file doesn’t yet exist, so you’ll have to type in the file name.

6. Set header to true and apply.

7. Go to the Map Step.

8. Validate the Map.

You may see a dialog box that looks like this. We will go into greater detail on the “Default Event Handler” and Event Handlers in general later in this courseware.

9. Click “OK” to accept the Default Event Handler.

10. Save this Map as “SourceDataFeatures_Sort” in the “Development” folder.

11. Run the Map.

12. Notice Results in Status bar.

13. Open the Target Data Browser and notice that the records are now sorted by state.

The following information is taken from reports generated by Repository Manager for the SourceDataFeatures_Sort transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True


Sort Fields

Fields("State") type=Text, ascending=yes, length=16

Target (ASCII (Delimited))

location $(funData)AccountsSortedbyState.txt

TargetOptions

header True

outputmode Replace

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.Account Number Records("R1").Fields("Account Number")

R1.Name Records("R1").Fields("Name")

R1.Company Records("R1").Fields("Company")

R1.Street Records("R1").Fields("Street")

R1.City Records("R1").Fields("City")

R1.State Records("R1").Fields("State")

R1.Zip Records("R1").Fields("Zip")

R1.Email Records("R1").Fields("Email")


R1.Birth Date Records("R1").Fields("Birth Date")

R1.Favorites Records("R1").Fields("Favorites")

R1.Standard Payment Records("R1").Fields("Standard Payment")

R1.Payments Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")


Source Data Features – Filter

Objectives

At the end of this lesson you should be able to apply simple filters to your source data.

Keywords: Source Filter Window, Sample Size, and Target Filter Window

Description

There are two ways to restrict the target file to contain only certain source records. The most flexible way is to supply processing logic in the body of your transformation. Using this approach, discussed in later lessons, you have nearly unlimited control and can do almost anything. You can implement any desired business rules. For example, you might want to exclude from the Target file any account records with invalid Zip codes.

The second way to restrict the number of Source records that are placed in the Target file is to use a filter. You can do almost anything in a filter that you can do in processing logic, but the virtue of filters is that they are usually easier to establish, change and remove.

For example, you may be testing a new Transformation against a file with more than a million records. You have a complex calculation that needs to work properly, but you won’t be able to tell if it is working until you look at the very first record in the Target file. Does this mean you have to process all the records just to see the results for the first one?

No. Filters are there for this kind of situation. A source or target filter is a simple criterion that determines whether a source record is to be processed or a target record written. The user has the option of using one of four methods to test each source or target record to see if it should be processed or written. You may (1) process/write only the first N records, (2) process/write all records from record number X to record number Y, (3) process/write every Nth record or (4) supply an expression (more about expressions later) which, if evaluated to a “true” result, causes the record to be processed/written. All these options are controlled through the Source Filters and Target Filters dialogs.

The user can use either type of filter or even both types at once. Using both types in the same transformation, however, requires some thought. If your objective is to obtain a target file with 100 records, you can use either a source or target filter. You will get the result you want, but only if you do not bypass any records in your own processing logic. As another example, if you filter a 5000-record source file to process only the first 1000 records, and then also supply a target filter to write every 10th record to the target, you will only get 100 target records, not 500. The target filter will be applied to those source records that make it through the source filter.

As in sorting, filtering is performed dynamically when the transformation runs; source filter results are not shown when the Source Data Browser is used.
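As a concrete illustration of option (4) above, a source filter expression that keeps only records whose Zip code is at least five characters long might look like the sketch below. It assumes RIFL provides the usual Len and Trim string functions; the exercise that follows uses a simpler expression of the same form.

Len(Trim(Records("R1").Fields("Zip"))) >= 5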

Exercise

From within the Transformation Map Designer:

1. Connect to the Ascii Delimited file, Accounts.txt as your source.

2. Set Header to true and apply as we have done previously. (You can do both these steps in one by using the User-Defined connection myAccounts.sc.xml that we created in a previous exercise.)

3. Click the Source Filters button in the toolbar.

4. Note the radio buttons at the bottom of the window where it says “Define Source Sample”. We could choose a range of records. We could choose to process every Nth record from the source. (The behaviour of this is that you always get the first record, then every Nth record after that: 1, N+1, 2N+1, 3N+1…)

5. In this case, we want our filter to bring in only the records from Texas. We will use the “Source Record Filtering Expressions” box. This allows us to enter a statement in the RIFL scripting language (see The RIFL Script Editor chapter) that will evaluate to True or False. We will process the records that evaluate to True. In the “Source Record Filtering Expressions” box, let’s type: Records("R1").Fields("State") == "TX"

6. Create a target connection to an Ascii Delimited file called AccountsinTX.txt. This file doesn’t yet exist, so you’ll have to type in the file name.

7. Set header to true and apply.

8. Go to the Map Step.

9. Validate the Map.

You may see a dialog box prompting you to accept the “Default Event Handler”. We will go into greater detail on the Default Event Handler and Event Handlers in general later in this courseware.

10. Click “OK” to accept the Default Event Handler.

11. Save this Map as “SourceDataFeatures_Filter” in the “Development” folder.

12. Run the Map.

13. Notice Results in Status bar.

14. Open the Target Data Browser and notice that there are only records from Texas.

There follows some information taken from reports generated by Repository Manager from the SourceDataFeatures_Filter transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Filter Expressions

Records("R1").Fields("State") == "TX"

Target (ASCII (Delimited))

location $(funData)AccountsinTX.txt

TargetOptions

header True

outputmode Replace

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.Account Number Records("R1").Fields("Account Number")

R1.Name Records("R1").Fields("Name")

R1.Company Records("R1").Fields("Company")

R1.Street Records("R1").Fields("Street")

R1.City Records("R1").Fields("City")

R1.State Records("R1").Fields("State")

R1.Zip Records("R1").Fields("Zip")

R1.Email Records("R1").Fields("Email")

R1.Birth Date Records("R1").Fields("Birth Date")

R1.Favorites Records("R1").Fields("Favorites")

R1.Standard Payment Records("R1").Fields("Standard Payment")

R1.Payments Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")

Target Output Modes – Replace, Append, Clear and Append

Objectives

At the end of this lesson you should be able to understand and implement each of the target output modes: Replace, Append, and Clear and Append.

Keywords: Output Mode, Replace Mode, Append Mode, Clear and Append Mode, and Schema Mismatch

Description

The target output mode “Replace” is used in two situations. In the first, the file or table does not yet exist, and in this case Map Designer creates it using the layout you have specified on the Map tab. In the second situation, where the file or table already exists, the replace mode deletes the file (or drops the table) first, and then recreates it using the layout you have specified on the Map tab.

The target output mode “Append” adds additional rows to a target file or table that already exists. If you are working with flat files as your targets, then the only available output modes are “Replace” and “Append.”

The issue is different when dealing with database tables. Database tables can have indexes and constraints built into them and there is a critical difference between “Replace” and “Clear and Append.” That difference is that the “Replace” mode effectively “drops” the table and then recreates it, whereas the “Clear and Append” mode “truncates” the table only. When you drop a table, you also drop any indexes or constraints that the table might have, while truncation preserves them.
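In SQL terms, “Replace” corresponds roughly to issuing DROP TABLE followed by CREATE TABLE, while “Clear and Append” corresponds roughly to TRUNCATE TABLE; this is why the first loses any indexes and constraints and the second preserves them.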

You can use “Clear and Append” if the table doesn’t exist (it will simply be created), though you will usually be picking an existing table from a dropdown of the tables that the database contains. As soon as you pick a table the Target Output Mode will change to “Append” and the structure of the table will be filled in on the Map Tab. You can then change the Output Mode to “Clear and Append” and map your target fields on the Map Tab.

Exercise

From within the Transformation Map Designer:

1. Connect to the Ascii Delimited file, Accounts.txt as your source.

2. Set Header to true and apply as we have done previously. (You can do both these steps in one by using the User-Defined connection myAccounts.sc.xml that we created in a previous exercise.)

3. Create a target connection to the TrainingDB database that we have set up previously. The table is called tblAccounts. Note that when we connected to this table, because it already existed, our output mode was automatically set to “Append”. Let’s set it to “Replace”.

4. Go to the Map Step.

Note that in this case we already have target fields defined. This is metadata (Field names, Field lengths, Datatypes) that is coming in from the database. Notice also that some fields are mapped and some are not. The Simple Map view does an automatic “Match by name” that pulls in field names that are exact matches from source to target. We will have to do the rest by hand.

5. For the “AccountNumber” field we click inside the target field expression, then click the down arrow.

6. We can then choose “Account Number” (note the space, which is not present in the target field name; that is why “Match by Name” failed).

7. Now we do the same for each of the remaining fields. Look at the charts below for specific mapping if needed.

8. Alternatively we could have done a right click in the “AccountNumber” Target Field Expression and clicked on “Match by Position”. In this case, we would have mapped all of our source fields into the target fields correctly. That will not always be the case, however.

9. Click the Run button.

10. Accept the Default Event Handler

11. Notice Results in the Target Data Browser. Note the number of records in the table.

12. Now let’s go back to the Target Connection Tab and set the Output Mode to “Append”.

13. Click the Run button.

14. Notice Results in the Target Data Browser. Note the number of records in the table.

15. Now change the Output Mode to Clear File/Table contents and Append.

16. Run the map and note the results.

17. Save this map as OutputModes_Clear_Append

There follows some information taken from reports generated by Repository Manager from the OutputModes_Clear_Append transformation in the Solutions folder (Note that there are also reports for the OutputModes_Replace and OutputModes_Append maps, but they are identical except for output mode):

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Fields("Birth Date")

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("Standard Payment")

R1.LastPayment Fields("Payments")

R1.Balance Fields("Balance")

Target Output Modes – Delete

Objectives

At the end of this lesson you should be able to understand and implement the target output mode: Delete.

Keywords: Output Mode, Delete

Description

The Delete mode is only available when your target is a relational database or an ODBC data Source. When you select "Delete From File/Table", Map Designer will search Target data for a match in a key field or fields which you have defined. Therefore, when you select Delete File/Table, you must also define a key using the Target Keys/Index window.

When you want to delete specific records from an existing table, you should use the target output mode Delete. Using the Delete mode requires that at least one field in the existing target contains values that match those in one field in the source file. For example, the existing target file may have a ShippingMethodCode while the source file has a field called ShipCode that contains the same possible values.

Since the target table already exists, as soon as you specify it and set the output mode to Delete on the Target Tab, you will find the target file’s fields listed on the Map Tab. Cosmos assumes that the first Target field is the key field. If there are additional key fields, highlight them, right-click in the highlight, and choose the “Set as Action Key” option. (If you need to remove a key, simply highlight the field, right-click in the highlight and choose the “Unset Action Key” option.) Next, you must map values to each of the Action Key fields. These will usually be Source fields, but they might be calculated fields. It is not necessary to map any other fields. Finally, you use the Target Keys and Output Mode Options button to specify whether you want to delete all matching records from the target or just the first one found.

Whenever a ClearMapPut is performed, the contents of the key field(s) in the target buffer are compared to all records in the target file, and either the first match or all matches are deleted.

Exercise

From within the Transformation Map Designer:

1. Connect to the Ascii Delimited file, InactiveAccounts.txt as your source.

2. Set Header to true and apply as we have done previously.

3. Create a target connection to the TrainingDB database that we have set up previously. The table is called tblAccounts. Note that when we connected to this table, because it already existed, our output mode was automatically set to “Append”. Let’s set it to “Delete”.

4. On the Map Step, we see that the “Name” and “Company” fields have been automatically mapped. This is actually not helpful to us. Let’s take them out.

5. Note that “AccountNumber” was automatically set as our key field. Let’s map that from the source field “Account Number”.

6. Validate the map.

7. Accept the Default Event Handler.

8. Click the Run button.

9. Notice Results in the Target Data Browser. Note the number of records in the table.

10. Be aware that you will only see results the first time you run the Map. This is because we will remove the matching records the first time and they will no longer exist. You will need to load the original source records into the target table before you run the Delete Mode map a second time. Assuming you correctly ran the previous Map in “Clear and Append” mode, you can run it again to prime the table.

There follows some information taken from reports generated by Repository Manager from the OutputModes_Delete transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)InactiveAccounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Delete

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

Target Output Modes – Update

Objectives

At the end of this lesson you should be able to explain exactly what the Target Output Mode “Update” does. You should also be able to write a transformation that will update specific records in an existing file or table.

Keywords: Output Mode, Update File and Schema Mismatch

Description

The Update mode is only available when your target is a relational database or an ODBC data Source. When you select Update File/Table, Map Designer will search Target data for a match in a key field or fields that you have defined. So, when you select Update File/Table, you must also define a key using the Target Keys/Index window.

The Target Output Mode “Update” is similar in operation to the “Delete” mode. The user must indicate which of the current target fields will be used as the key to identify records to be updated. Each of these key fields must have a mapping expression (usually a Source file field). And you must determine whether, if the target table may contain records with duplicate keys, you wish to update all of them or just the first one found. Also as in “Delete,” when a ClearMapPut is performed, the transformation tries to match the mapped key values to each record in the current target.

Unlike Delete, however, if it finds a match then the options set in the Target Keys and Output Mode Options dialog control whether and how an update is performed. You may update just the first matching record found or all of them (if the target may contain records with duplicate keys). For each of those options, you may decide to insert new records (ones that don’t match any record in the target) or not. Finally, you can ignore matching records and simply insert those that don’t match any record currently in the Target file.

Finally, and most importantly, you must specify in your design which fields will be updated, and with what new values, when a match is found. Your options are to update each Target field with the current value of its Target Field Expression (even if that result is null values) or to just update the fields that actually have Target Field Expressions. This is faster if the number of fields you need to update is a small subset of all the fields in the Target file.

Mapping (setting the “target field expressions”) plays a much more critical role in Update than in Delete (where no mapping other than the key fields has any meaning). In Update, you can choose to update all the fields or just the ones with expressions. You should be careful, however, when choosing the “update all fields” option. Although you may want to do this, it is not the common practice, so you will have to click the radio button in the “Target Keys, Indexes and Options” dialog box that is marked “Allow null values to overwrite data in target fields”. When you do, fields that don’t have expressions won’t simply be left alone; they will be cleared.

Exercise

From within the Transformation Map Designer:

1. Connect to the Ascii Delimited file, AccountsUpdate.txt as your source.

2. Set Header to true and apply as we have done previously.

3. Create a target connection to the TrainingDB database that we have set up previously. The table is called tblAccounts. Note that when we connected to this table, because it already existed, our output mode was automatically set to “Append”. Let’s set it to “Update”.

4. Go to the Map Step.

Note that in this case we already have target fields defined. This is metadata (Field names, Field lengths, Datatypes) that is coming in from the database. Notice also that some fields are mapped and some are not. The Simple Map view does an automatic “Match by name” that pulls in field names that are exact matches from source to target. We will have to do the rest by hand.

5. For the “AccountNumber” field we click inside the target field expression, then click the down arrow.

6. We can then choose “Account Number” (note the space that is not there in the target field. That’s why “Match by Name” failed).

7. Now we do the same for each of the remaining fields. Look at the charts below for specific mapping if needed.

8. Alternatively we could have done a right click in the “AccountNumber” Target Field Expression and clicked on “Match by Position”. In this case, we would have mapped all of our source fields into the target fields correctly. That will not always be the case, however.

9. Note that “AccountNumber” was automatically set as our key field.

10. Open the “Target Keys, Indexes and Options” dialog box. Note all the options that are possible using Update Mode. In this case the defaults, “Update all matching records and insert non-matching records” and “Update only mapped fields”, will take care of us, although “Update ALL fields” would give the same results since we have mapped all fields.

11. Click the Run button.

12. Accept the Default Event Handler

13. Notice Results in the Target Data Browser. Note the number of records in the table.

14. When we run this map we will be updating the records, so unless you restore the table to its original contents before you run the map again, you won’t see any change. You can just run the map we created for the Clear and Append mode and then run the Delete mode map before re-running this one.

There follows some information taken from reports generated by Repository Manager from the OutputModes_Update transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)AccountsUpdate.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Update

Key Field AccountNumber

Update Mode Options Update ALL matching records and insert non-matching records.

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Fields("Birth Date")

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("Standard Payment")

R1.LastPayment Fields("Payments")

R1.Balance Fields("Balance")

The RIFL Script Editor

The RIFL Script Editor is the location where you can write your own scripts (expressions) to include with your Transformations.

This Editor includes a list of all of the functions available to you in the Rapid Integration Flow Language (RIFL). In addition, it gives you the syntax for each function. Examples for each function are included in the help files. The RIFL Script Editor allows a user to use point-and-click and drag-and-drop, with very little typing, to create accurate and valid RIFL scripts that manipulate and validate data during transformation.

IntegrationArchitect_RIFL_ScriptEditor.ppt

RIFL Script - Functions

Objectives

At the end of this lesson you should be able to write one-line RIFL scripts that are simple calls to pre-defined functions like NamePart or DateValMask, or that concatenate two source fields into a single target field. Local variables, line continuation and comments will also be discussed.

Keywords: Function Builder, Len, Trim, NamePart, DateValMask, comments, continuation character.

Description

In this exercise we will manipulate our data as we run the Transformation. To do this we will use RIFL in the Target Field Expressions of the fields that we want to manipulate. We’ll work with both the Field Mapping Wizard and the RIFL Script Editor. We’ll be working with the name and the date field.

Exercise

From within the Transformation Map Designer:

1. Connect to the Ascii Delimited file, Accounts.txt as your source.

2. Set Header to true and apply as we have done previously.

3. Create a target connection to the TrainingDB database that we have set up previously. The table is called tblAccounts. Note that when we connected to this table, because it already existed, our output mode was automatically set to “Append”. Let’s set it to “Clear File/Table contents and Append”.

4. Go to the Map Step.

5. Map all fields as we have before except for the “Name” and the “BirthDate” fields.

The first field that we’ll work with is the “BirthDate” field. In our source, the birth date field has string data in it that looks like this: “11/12/1975”. Just from looking at that string, is it November 12, 1975 or December 11, 1975? We don’t know, and the database doesn’t know either. Most databases will not accept a string value into a date or datetime field. We will have to tell the database that the date we’re working with is actually November 12, 1975. We do that with the RIFL function DateValMask in the Target Field Expression.

6. Click on the “Wizard” icon in the “BirthDate” field.

7. In the lower right of the Field Mapping Wizard we’ll Choose Matching Source Field by clicking the down arrow. Choose “Birth Date” and click next.

8. Choose the “Needs additional TRANSFORMATION” radio button and click next.

9. From the “Transformation Function List” dropdown choose Datevalmask. (You can save time scrolling through this list by clicking the first letter of the function you want on the keyboard.) Click next.

10. Click the ellipsis in the “DateString” area.

11. The RIFL Script Editor pops up. In the lower left pane click on Source R1. Then in the lower right pane double click “Birth Date”. Click “OK”.

12. In the “Mask” area, type in “mm/dd/yyyy”. Then click OK. Then “Next” on the wizard and “OK” on the pop up.

Masks are used in many RIFL functions. The only way to know what values to put into those masks is to look in the Help files. Just open the Help files and use the index to find the particular RIFL function you may be using.
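When the wizard finishes, the Target Field Expression for “BirthDate” should end up equivalent to the expression shown in the report at the end of this lesson:

DateValMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")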

The next field we’ll work with is the “Name” field. The source data names are in this format, First Middle Last. A sample from the first record is George P Schell. In this Map we want our target names in this format, Last, First Middle Initial. Like this: Schell, George P. In the previous part of this exercise we used the Field Mapping Wizard. Now we’ll go directly to the RIFL Script Editor.

13. Left click in the “Name” field. Then left click on the ellipsis. This bypasses the wizard and goes straight to the RIFL Script Editor. If there is not an ellipsis, there will be a drop down arrow. Click that and choose “Build Expression”.

14. On the toolbar on the top click the icon on the far right “Hide Expression Tree”. This gives us more room in the Editor window.

15. Delete the Fields(“Name”) value or any other value in the Editor pane so that it’s blank.

16. In the lower right pane scroll down to the “NamePart” function. Do a single left click on it. Notice that there is a short description of the function in the lowest right portion of the RIFL Script Editor. There is also the syntax of the function with descriptive placeholders for the parameters in the lowest left.

17. Double click the “NamePart” function.

18. In the editor window select the “Mask” parameter. Type in “l”. That’s a lower case L in double quotes.

19. Select the “Name” parameter. Pull in the source field “Name” as we did above for the “Birth Date”. (See step 11.)

20. If we left this function as is, it would return only the last name from the source data. We need more than that, though. Let’s type an ampersand (&), or click the concatenation icon, to put an ampersand after our function.

21. We can see that the RIFL Script Editor will do a lot of the work for us. Let’s use what we’ve learned to finish this script:

NamePart("l", Records("R1").Fields("Name")) & ", " & _

NamePart("f", Records("R1").Fields("Name")) & " " & _

NamePart("mi", Records("R1").Fields("Name"))

Logically, this script must be a single expression on one line. The space and underscore characters at the end of a line act as a continuation that allows us to move to the next line and make the script easier to read.

22. Close the RIFL Script Editor and save this Map as “RIFLScript_Functions” in the Development folder.

23. Run the Map and note the results.

There follows some information taken from reports generated by Repository Manager from the RIFLScript_Functions transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

R1.Name NamePart("l", Records("R1").Fields("Name")) & ", " & _

NamePart("f", Records("R1").Fields("Name")) & " " & _

NamePart("mi", Records("R1").Fields("Name"))

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate datevalmask(Fields("Birth Date"), "mm/dd/yyyy")

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("Standard Payment")

R1.LastPayment Fields("Payments")

R1.Balance Fields("Balance")

RIFL Script – Flow Control

Objectives

At the end of this lesson you should be able to use multi-line RIFL scripts that utilize one of the flow-control structures like an If-Then-Else statement.

Keywords: Flow Control, If Then Else, Discard, IsDate, and DateValMask Functions, Editor Properties

Description

Flow Control is the management of Data flow. As used in RIFL, it is the management of where and/or how a particular piece of source data is mapped into the target. The most commonly used Flow Control function is the If Then Else statement. The logic goes like this:

If this statement about my data is true then

Do this particular thing.

Else

Do this other thing.

End if

There are other more complex Flow Control functions, but this one will get us started.

In this exercise we want to look at our source dates and check them to see if they are valid dates. If the date for a record is valid then we’ll send the record to the target. If the date is not valid, we’ll send a message to our log file and we’ll discard that record so it doesn’t go into our target.

So far in this course the exercise steps have been very detailed. They will not be so detailed from here on. We assume that when we say, for instance, “Open the RIFL Script Editor”, you no longer need to be told exactly where to click.

Exercise

1. Create a new Map and connect to the source and target listed below.

2. On the Map tab, map all fields as before except for the “Birth Date” field.

3. Open the RIFL Script Editor in the “Birth Date” field.

This is a good time to set a property in our RIFL Script Editor that will make things easier for us as we move along. We can set the editor to show a line number for each line of our scripts.

4. On the menu bar choose “View”, then “Editor Properties”. Click on the “Misc” tab. In the lower left see “Line numbering”. In the “Style” dropdown choose “Decimal”. Change the “Start at” to 1. Click “OK”.

5. Now in the lower left pane of the RIFL Script Editor, above “<All Functions>” click on “Flow Control”. In the lower right pane, double click on “If…Then…Else”.

Notice that the RIFL Script Editor puts the syntax for the If Then Else Statement into the editor window for us. We would replace “condition” with a statement that would evaluate to true or false. “statement block one” would be what we do if the statement is true. Then “statement block two” is what we do if the statement is false.

6. Let’s now enter the following script, replacing what we have in the editor.

Dim A

A = Records("R1").Fields("Birth Date")

If Isdate(A) then

datevalmask(A, "mm/dd/yyyy")

Else

Logmessage("Warn", "Account Number " & Records("R1").Fields("Account Number") & _

" has an invalid date: " & A)

Discard()

End if

Line 1 declares a local variable “A” that will be available to us only in this script.

Line 2 sets that “A” variable to the value contained in the “Birth Date” field in the source.

Line 4 uses the IsDate RIFL function to test the incoming string value to see if it can be converted to a valid date.

Line 5 converts that date for use in the target.

Lines 7 and 8 are a LogMessage function. Note the continuation characters at the end of line 7. The first parameter of a LogMessage function is always either “Info”, “Warn”, “Error”, or “Debug”. The second parameter is whatever you want to write to your log file. In this case we have a combination of literal strings and data coming from the source.

Line 9 is the Discard function that causes this source record not to be written to the target.

7. Let’s click the “Validate” icon. We should see “Expression contains no syntax errors.” at the bottom of the RIFL Script Editor. Click “OK”.

8. Validate and save this map as “RIFLScript_FlowControl”.

9. Run the Map and note the results in the target. Note that only 201 records made it into the target.

10. Click on the “View TransformMap.log” icon. Note the results of our LogMessage functions.

There follows some information taken from reports generated by Repository Manager from the RIFLScript_FlowControl transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Dim A

A = Records("R1").Fields("Birth Date")

If Isdate(A) then

datevalmask(A, "mm/dd/yyyy")

Else

Logmessage("Warn", "Account Number " & Records("R1").Fields("Account

Number") & _

" has an invalid date: " & Records("R1").Fields("Birth Date"))

Discard()

End If

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("Standard Payment")

R1.LastPayment Fields("Payments")

R1.Balance Fields("Balance")

Transformation Map Properties

The property-sheet toolbar button accesses the Properties dialog for all global settings.

You can affect many areas of the Transformation Map, including log file settings, runtime execution properties and error handling, and you can define external code modules.

Reject Connection Info

Objectives

At the end of this lesson you should be able to understand and implement a Reject File.

Keywords: Reject function, Reject Connect Info, and the OnReject event handler.

Description

The Reject Connect Info dialog allows you to specify your reject file. You can type your connection string, or you can use the buttons to Build New Connection String, Build Connection String from Source, Build Connection String from Target, or Clear Reject.

Exercise

1. Using the previous Map, change the Discard() function call to a Reject() function call.

2. Go to the Map Properties dialog and click Build Connection String from Source.

3. Change the file name portion of the connect string to read “BadDateRejects.txt”.

4. Using the Target Event Handler “OnReject”, set a “ClearMapPut” action and change its Target parameter to “Reject”.

5. Click the Run button (Play button in toolbar).

6. Note the results in the Target Data Browser.

7. Use the Data Browser to examine the “BadDateRejects.txt” file.

There follows some information taken from reports generated by Repository Manager from the Reject_Connect_Info transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Clear File/Table contents and Append

OnReject ClearMapPut Record

target name Reject

record layout R1

buffered false

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("Account Number")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Dim A

A = Records("R1").Fields("Birth Date")

If Isdate(A) then

Datevalmask(A, "mm/dd/yyyy")

Else

Logmessage("Warn", "Account Number " & Records("R1").Fields("Account

Number") & _

" has an invalid date: " & Records("R1").Fields("Birth Date"))

Reject()

End If

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("Standard Payment")

R1.LastPayment Fields("Payments")

R1.Balance Fields("Balance")

Event Handlers & Actions

The event handling capabilities in the Map Designer are designed to allow tremendous flexibility in the handling of data. Actions can be triggered at virtually any point in the Transformation process. Messages can be logged, expressions executed, possible errors can be traced, normal data manipulation and memory clearing can be done, and the Transformation itself can be ended or aborted. You have complete control over when these Actions occur, what Actions occur, and how many Actions occur.

IntegrationArchitect_EventHandlers.ppt

Understanding Event Handlers

Objectives

At the end of this lesson you should be able to define a transformation event. You should understand the relationship between an event and an Event Handler, between an Event Handler and Event Actions and between Event Actions and Event Action Parameters. You should know where to find the events within the Map Designer GUI and how to open the Action List for an event.

Keywords: Event Action, Event Handlers, Event Precedence, and ClearMapPut Action

Description

Event Concepts

An Event is a “point in time” in the life of a transformation, similar to an event in your lifetime. As in your own life, some events only occur once (e.g., you graduate from high school) while other events occur repeatedly (e.g., you have a birthday). In a transformation, two such events are the start and the end of the transformation; each of these occurs only once. Another event might be when a source record is read, which would probably occur many times. A transformation may thus be thought of as a long sequence of events. Some events occur one time only, others occur many times and still other groups of events may repeat over and over.

As part of your transformation design process, you may choose to perform one or more tasks when one or more of these events occur. Your transformation will use at least one event and that event will perform at least one task. The tasks that events may perform are called “Actions.” There is a wide range of actions available in each event. When you decide to perform an action, you have the ability to control just how that action is performed; these control specifications are called “Action Parameters.”

As a simple example, you might decide to use the event that occurs every time a source record is read (the “AfterEveryRecord” event) and you might decide to perform the action that causes a target record to be written (the “ClearMapPut” action). But you might have multiple target record layouts from which to choose, so you might supply an action parameter for the action to specify the target record layout you want to use.

Using an Event

Your first task will be to find the event you want, and events are grouped in a number of places. First, there are events that apply to the transformation as a whole (e.g., BeforeTransformation), and these can be found in the Transformation and Map Properties dialog. Next, there are source record events that apply to each specific source record type (e.g., AfterEveryRecord) and these can be found in the source hierarchy on the Map Tab under the matching record type heading. Next, there are source record events that apply to each and every source record that is read (e.g., BeforeEveryRecord) and these can be found under the “General Event Handlers” heading in the source hierarchy. Finally, there are two groups of target record events: one group that applies to target records of a specific type and one that applies to each and every target record (no matter what type). These are found in the target hierarchy under headings like those for the source record events.

Once you’ve found the event you want, then you select it to bring up its current list of actions.

The Default Event Handler

As we mentioned above, a transformation must have at least one event, and that event must have at least one action. To ensure that your transformations meet this requirement, the Map Designer will, if you do not use any events yourself, add one event action. The event that it uses is the AfterEveryRecord event for the source file, and the action that it supplies is the ClearMapPut action for the target file. So, if you do nothing, your transformation will automatically read every source record and, for each one, clear the target buffer, execute all of your mapping expressions and then write the target buffer contents to the target file. This event and its associated action are collectively referred to as the “Default Event Handler.”

When the Map Designer supplies this default event handler, you are informed via an on-screen message box. However, the Map Designer supplies the default event handler ONLY if you do not, yourself, set up and use any event handlers. If you do, then the Map Designer WILL NOT ADD the default event handler to those that you set up. (The Map Designer will, however, warn you when you are about to run a transformation that has no event action that will cause a target record to be written.)

Some Representative Events

Some events are very basic and are used frequently. Most of these events will be discussed and used in the exercises in this course module. You should be aware of these events and when they occur. These events are:

BeforeTransformation

This is the first event that occurs in any transformation, and is very useful for all the housekeeping and set-up tasks that you may wish to perform.

AfterTransformation

This is the last event that occurs before a transformation ends, and it is very useful for accessing final totals and other values, and performing housekeeping and clean-up tasks.

Specific AfterEveryRecord

The word “specific” refers to an event that is tied to a particular source or target record type. This event occurs whenever a source record of a specific type is read, and is the ideal place to perform the action you want to do using the values from each source record.

Specific AfterFirstRecord

This event only occurs when the first record of a specific type is read, and it is the ideal event in which to perform housekeeping and set-up tasks that relate to a single record type.

General AfterFirstRecord

The word “general” refers to an event that is not tied to a particular source or target record type. This particular event occurs only when the first record is read from the source file and is again a great place to perform general housekeeping and set-up tasks that relate to all record types.

General AfterEveryRecord

This event occurs whenever a source record is read from the source file, no matter what type it may be. It is the best place to put common tasks: those that will apply to all source records.

Some Representative Actions

There are many actions that you can perform whenever a particular event occurs. Some actions are used very often and are common to many events. The two most common, and the two that we will use most often in the exercises in this course, are:

ClearMapPut

This action does three things. First, it clears the target buffer (for the record type specified in its “Layout” parameter). Next, it executes all the mapping expressions that you have supplied for each field in the target buffer, in effect filling the target buffer fields with the data you want. Finally, it writes the contents of that buffer to the target file.

Execute

This action executes a script created with the RIFL Script Editor. The scripts you write and execute perform the work of your transformation.

Event Sequence Issues

Objectives

At the end of this lesson you should have a general understanding of the rules governing the sequence in which events occur in a typical transformation. You should also have the tools necessary to investigate and determine the sequence of events in a transformation.

Keywords: Event Precedence, Null Connector, Global variables

Description

There are many Events available to you in a transformation, and using the appropriate one(s) may well depend on the sequence in which they are activated. Although every event may not take place for each Map, there is an “Event Precedence” framework that dictates the sequence based on the Map instructions provided. The Events that are activated will depend on the data in the source file, your own choices and other factors. For a more complete description of the Event Precedence, see the Event Precedence topic in the Help file.

First of all, to derive these general rules, you can perform your own tests. Create a simple transformation that uses the Null source connector and that produces a target file with two fields: the record number from the source and the contents of a global variable. Then, choose the events you are interested in. For each one, set the global variable equal to the name of the event, and then write a target record. When you examine the target file, you will see the order in which the events were activated.

This exercise also uses global variables for the first time. Using the Global Variables option, you can specify scalar variables, internal objects, or ActiveX objects at the Private or Public level in your Transformations. You define global variables in the Map Properties dialog.

Most of our exercises make some attempt to mimic a real world situation in a simplified fashion to get the concept across. This exercise, however, is pure classroom. What we’re doing here is setting up a global variable to hold a value. Then as we enter each event, we’ll use an Execute action to give that variable the name of the event. Then we’ll write a target record. When the Map has run, our target will show the order in which the events fired.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Events_SequenceTest transformation in the Solutions folder:

Source(Null)

SourceOptions

Record count 5

Target (ASCII Delimited)

location $(funData)EventNames.txt

Target Options

header True

outputmode Replace

Variables

Name Type Public Value

eventName Variant no ""

BeforeTransformation Execute

expression eventName = "Before Transformation"

BeforeTransformation ClearMapPut Record

target name Target

record layout R1

buffered false

AfterTransformation Execute

expression eventName = "After Transformation"

AfterTransformation ClearMapPut Record

target name Target

record layout R1

buffered false

Source R1 Events

AfterEveryRecord Execute

expression eventName = "R1 AfterEveryRecord"

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

buffered false

Source Events

AfterEveryRecord Execute

expression eventName = "General AfterEveryRecord"

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

buffered false

BeforeFirstRecord Execute

expression eventName = "General BeforeFirstRecord"

BeforeFirstRecord ClearMapPut Record

target name Target

record layout R1

buffered false

OnEOF Execute

expression eventName = "General OnEOF"

OnEOF ClearMapPut Record

target name Target

record layout R1

buffered false

Map Schema

Record R1

Name Type Length Description

RecordNumber Text 16

EventName Text 25

Total 41

Map Expressions

R1.RecordNumber Fields("Record Number")

R1.EventName eventName

Using Action Parameters – Conditional Put

Objectives

At the end of this lesson you should be able to open the action list for an event, choose an action and add it to the list, supply mandatory and optional parameters for the action and place the action in the correct sequence within the action list. For those actions that allow it, you should also know how to make the execution of an action “conditional.”

Keywords: AfterEveryRecord event, and ClearMapPut action

Description

Once you’ve found the event you want, then you select it to bring up its current list of actions. From this list, you can add an action of whatever type you want. When you add actions, you can either insert them into the current list of actions where you’d like them to be performed, or you can add them to the end of the current list and then move them to the desired place in the sequence.

Once you’ve selected and entered an action, you can set its parameters and control its function. For example, changing the Target Record layout parameter determines what expressions are used and what kind of target record will be written.

Some actions can be performed conditionally, and the ones that can will have Count and Counter Variable parameters. The Count parameter accepts any expression, the result of which must be a numeric value. When this value is zero, the action is not performed; when it is one, the action is performed. When the value is greater than one, the action is performed that many times (with the Counter Variable parameter providing an index).

This exercise will in effect be the same as our exercise that used the Discard function. We will check for bad dates in the source, and any that are bad will not go into the target. We will achieve these results in a different way, though. We will put our If-Then-Else logic in the count parameter of the ClearMapPut. If we have a good date, we’ll set that parameter to one and fire the action. If it is not a good date, we will increment a variable that keeps track of the number of bad dates we have, and we will set the count parameter to zero so the ClearMapPut does not fire, which means the target record will not be written. At the end of the Map we will display a message box to see how many bad dates we had.
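The report below shows the count expression itself, but not the end-of-map message box. That part could be implemented with an Execute action in the AfterTransformation event along the following lines (a sketch only; the availability of the VB-style MsgBox function in RIFL is an assumption, so check the Help files):

' Sketch for an AfterTransformation Execute action
MsgBox("Number of bad dates found: " & myBadDates)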

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Events_ConditionalPut transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

count Dim A

A = Records("R1").Fields("Birth Date")

' Use flow control to test for a valid date

If Isdate(A) Then

' Enable the Put action by setting to one

1

Else

' Invalid date, log a message

Logmessage("Error", "Account number: " & Records("R1").Fields("Account Number") & _ "

Invalid date: " & Records("R1").Fields("Birth Date")) ' Increment counter

myBadDates = myBadDates + 1

' Suppress the Put action by setting to zero

0

End If

buffered false

Map Expressions

R1.AccountNumber Records("R1").Fields("Account Number")

R1.Name Records("R1").Fields("Name")

R1.Company Records("R1").Fields("Company")

R1.Street Records("R1").Fields("Street")

R1.City Records("R1").Fields("City")

R1.State Records("R1").Fields("State")

R1.Zip Records("R1").Fields("Zip")

R1.Email Records("R1").Fields("Email")

R1.BirthDate datevalmask(Records("R1").Fields("Birth Date"),"mm/dd/yyyy")

R1.Favorites Records("R1").Fields("Favorites")

R1.StandardPayment Records("R1").Fields("Standard Payment")

R1.LastPayment Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")

Using “OnDataChange” Events

Objectives

At the end of this lesson you should be able to use an OnDataChange event to execute certain actions whenever the value of a field by which the input file is ordered changes. You should also be able to manipulate the first and last data change events to achieve your desired results.

Keywords: OnDataChange, and Record Type Event Handlers

Description

When source files are sorted by one or more data items, Map Designer gives you the ability to “watch” the value of one of those sort keys as source records are put into the source buffer and, when the value changes from one record to the next, take whatever actions you wish. Processing of this type is very common; three situations in which it is often used are (1) to produce summary information in the target file, (2) to optimize transformations performing lookups and (3) when the target is hierarchical.

First, you specify the source field by which the source file is sorted. The transformation will then monitor the position that field occupies in the source buffer. Whenever a new source record is placed in the buffer, the transformation will compare the value of that field in the new record (the one just placed in the buffer) to the value that field had in the previous record. When the values are different, the OnDataChange event is activated and, like any other event, its Event Handler will execute the list of actions you have specified.

There are two special situations that are also available to you. When the very first record is placed in the buffer the value of the field being monitored will be different than what had been in the buffer previously (the source buffer always starts filled with null values), and so the OnDataChange event will be activated. Similarly, when the source buffer is cleared after the last record has been processed, the value of the field being monitored will change from some real value to null values, and again the OnDataChange event will be activated. But since these situations may or may not be useful in any given transformation, you have the option of using one or the other of them, or both of them or neither of them. This is controlled in the “Data Change Event Management Options”.

For all of this to work the source file must be (or at least should be) in order by the value(s) being monitored. If it is not, you can either (1) physically sort it prior to its input into the transformation or (2) allow the transformation to dynamically sort it. For flat files, using the Source Keys and Sorting dialog will perform this dynamic sort. For an SQL source, you can also use the “Order By” clause in your SQL query.

You can monitor up to five different data items in a single transformation. And not only do you have an OnDataChange event for each one, but there is also an event that is activated whenever any monitored field changes and an event that is only activated when all monitored fields change at the same time.

It is true that sorting the data at the beginning of your transformation increases execution time, but the reductions in execution time that are possible with the OnDataChange strategy will usually far outweigh the overhead of the sort itself. This is particularly true when IO-intensive operations, such as lookups, are involved.

This exercise builds a map that sorts our Accounts.txt file by state. Our target file will have one record for every state in the source file. Each record will have three fields, the state, the number of accounts in that state, and the total balance of all records in that state.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Events_OnDataChange transformation in the Solutions folder:

Variables

Name Type Public Value

varState Variant no ""

varCounter Variant no 0

varBalance Variant no 0

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Sort Fields

Fields("State") type=Text, ascending=yes, length=2

Target (Excel 97)

Location $(funData)AccountSummariesbyState.xls

table Sheet1

TargetOptions

Header Record Row 1

Outputmode Replace

Source R1 Events

AfterEveryRecord Execute

expression ' Set the state value for the current record because it will be different at "OnDataChange"

varState = Records("R1").Fields("State")

' Increment the counter for the number of records within this block

varCounter = varCounter + 1

' Accumulate the balance for the records within this block

varBalance = varBalance + Records("R1").Fields("Balance")

OnDataChange1 ClearMapPut Record

target name Target

record layout R1

buffered false

OnDataChange1 Execute

expression ' Reset these vars for next block of records

varCounter = 0

varBalance = 0

Data Change Monitors

recordlayout name R1

Records("R1").Fields("State") sequence=0, trigger=1

Data Change Event Management Options

Setting Suppress first ODC event/Fire extra ODC event at EOF

Record R1

Name Type Length Description

State Text 16

Number_of_Accounts Text 16

Total_Balance_of_Accounts Text 16

Total 48

Map Expressions

R1.State varState

R1.Number_of_Accounts varCounter

R1.Total_Balance_of_Accounts varBalance

Trapping Processing Errors With Events

Objectives

At the end of this lesson you should be able to use the OnError event handler to trap and handle processing errors yourself, including using file management functions to record information in a file independent of the Source, Target or Reject connections.

Keywords: Error Message Reference Chart, Error and Event Preferences, Chr, FileAppend, and File Functions

Description

Some errors can be “trapped” from within a script and “handled” there so that the transformation can continue (see “Error Handling Statements” in the Help File). There are other kinds of errors that cannot be dealt with that way. For example, the data truncation error is not identified within a script, but rather within the transformation’s output routines when a value too big for the target field is encountered.

To handle other kinds of errors, we need to be aware of the types of errors that can exist, the options we have in dealing with them, the list of specific error codes that we may wish to deal with and then some strategies for dealing with the records that may cause these errors.

In general, there are three types of errors that we may be concerned with. “Fatal” errors are those that will cause a map to stop. “General” errors are those that are not fatal but may affect the transformation process. An example might be a read error for a specific source record. “Warnings” are not necessarily errors, and include data truncation, field name changes, loss of precision, and so on. In the Error Logging Preferences, we can set these errors (and other messages) to be logged or not, and we have some control over when the transformation stops.

In other cases, we may wish to deal with certain errors, and it would be useful to know what all the errors are and how they can be identified. There are too many to list here, but you can find a complete list of the errors and error codes in the Help System in the series of pages entitled “errors”.

To give you flexibility in handling these errors, there are a number of individual events that are tied to specific errors; when such an error occurs, Map Designer checks the matching event handler to see whether you have supplied actions to be performed. If so, they are executed. If not, the default behavior for that error (e.g., aborting the transformation) takes place.

So, when we wish, we can use the specialized error event handlers, such as OnTruncateError. The transformation will automatically take care of identifying the error and, if we have utilized the appropriate event handler, transferring control to that event handler (instead of aborting the transformation). In the event handler, we can perform any tasks we wish to deal with the error that has occurred. Once we have done so, then we can either allow the transformation to terminate or, by using the Resume action, cause the transformation to pick up where it left off.
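As a minimal sketch of this pattern (the file name and field references are placeholders, and the complete solution appears in the exercise that follows), the OnError event might carry an Execute action along these lines, together with a separate Resume action on the same event so that processing continues with the next record:

' Record the values that caused the error in a side file (illustrative names)
FileAppend("$(funData)ErrorValues.txt", Records("R1").Fields("Payments") & "|" & _
Records("R1").Fields("Balance") & Chr(13) & Chr(10))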

In this exercise, our source will be the accounts.txt file. Our target will be a file that will show how many months each customer will take to pay off their balance if they continue to pay the same amount as they did previously. We’ll derive that number by dividing the “Balance” field by the “Payments” field. However, there will be a problem if we have a new customer who has a balance but has not yet made a payment, or any customer who, for whatever reason, did not make a payment last month. The problem will occur when we try to divide by that zero payment value. At that point, Map Designer will throw an error, and we will catch it using the error handling event handlers.


Another interesting aspect of this exercise is that we will write a file, separate from our target, that contains the payment and balance values that caused the error. We’ll do this with the FileAppend function. We’ll also see other file manipulation functions.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

3. Also observe the “DividebyZero.txt” file that was created in the Data folder.

There follows some information taken from reports generated by Repository Manager from the ErrorHandling_OnError_Event transformation in the Solutions folder:

Variables

Name Type Public Value

flagFirstTime Variant no 0

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ASCII (Delimited))

location $(funData)PaymentsRemaining.txt

TargetOptions

header True

Outputmode Replace


Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Source Events

BeforeFirstRecord Execute

expression Dim A

A = MacroExpand("$(funData)")

If FileExists(A & "DivideByZero.txt") Then

FileDelete(A & "DivideByZero.txt")

End If

/* This example shows the functionality of the MacroExpand, FileExists and FileDelete

functions, though similar results

could be had by using :

FileWrite("$(Data)DivideByZero.txt", "AcctNumber" & sep & "Payt" & sep & "Bal" & crlf)

This would replace any existing file with a file that contains only the header.

This would also make the flagFirstTime variable unnecessary.

*/

Target Events

OnError Execute

expression Dim sep, crlf

sep = "|"

crlf = Chr(13) & Chr(10)

If flagFirstTime = 0 Then

FileAppend("$(funData)DivideByZero.txt", "AcctNumber" & sep & "Payt" & sep & "Bal"

& crlf)

' set flag to 1 so header will not be written next time

flagFirstTime = 1

End If


FileAppend("$(funData)DivideByZero.txt", Records("R1").Fields("Account Number") &

sep & _

Records("R1").Fields("Payments") & sep & _

Records("R1").Fields("Balance") & crlf)

OnError Resume

Map Schema

Record R1

Name Type Length Description

Account Number Text 9

Payments Text 7

Balance Text 6

MonthsToGo Text 16

Total 38

Map Expressions

R1.Account Number Records("R1").Fields("Account Number")

R1.Payments Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")

R1.MonthsToGo Dim A, B

A = Records("R1").Fields("Payments")

B = Records("R1").Fields("Balance")

If Int(B/A) == B/A then

B/A

Else

Int(B/A) + 1

End if
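To make the rounding logic concrete: if Balance is 900 and Payments is 300, B/A is exactly 3, so MonthsToGo is 3; if Balance is 1000 and Payments is 300, B/A is roughly 3.33, Int(B/A) is 3, and MonthsToGo becomes 4. In other words, any partial month counts as a full month.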


Comprehensive Review

Put together everything you have learned so far.

Workshop Exercise

To test our knowledge and review the introductory module for the Cosmos Integration Essentials courses, we want to design a Map to load the Accounts.txt file into a target database.

Basic Map specifications:

Source Connector: ASCII (Delimited)

Source File: Accounts.txt

Header property: True

Target Connector: ODBC 3.x

Data Source Name: TRAININGDB

Table: tblIllini

Output Mode: Replace Table

Exercise

1. Map the four target fields with appropriate data from the source.

2. Reject all records from the state of Illinois into an ASCII Delimited file called Reject_Accounts.txt.

3. Use an appropriate Date/Time function to convert the formatted date strings into a real date-time data type.

4. Test for invalid dates using the “IsDate” function and reject those records as well.

5. Aggregate the Balances from all rejected records using a global variable.

6. Report the aggregated balance in the log file using the “LogMessage” function.

There is a map that does all of this in the Solutions folder, called Comprehensive_Review1. Open it and look only if you get stuck. Note that the solution map shows only one of several possible approaches; a rough sketch of the reject-and-aggregate logic appears below.
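The sketch below is illustrative only and is not the solution map: the field names, the variable name, the reject mechanism and the LogMessage arguments are all assumptions. It simply shows the shape of the reject-and-aggregate logic, with the rejected-balance total written out once at the end of the run.

' In an AfterEveryRecord Execute action (illustrative names)
If Records("R1").Fields("State") == "IL" Or Not IsDate(Records("R1").Fields("Birthdate")) Then
    varRejectedBalance = varRejectedBalance + Records("R1").Fields("Balance")
    ' route this record to the Reject connection here instead of the target
Else
    ' put the record to the target here
End If

' In an end-of-transformation event (the LogMessage arguments are assumed)
LogMessage("Info", "Total balance of rejected records: " & varRejectedBalance)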


Metadata – Using the Schema Designers


Structured Schema Designer

The Structured Schema Designer provides a visual user interface for defining the structure of data files. The resulting metadata is stored as Structured Schema files with an “.ss.xml” extension. The “.ss.xml” files include schema, record recognition rule and record validation rule information.

In the Structured Schema Designer, you can create or modify schemas that can be accessed in the Map Designer to provide structure for Source or Target files.

You can also use the Data Parser to manually parse Binary, fixed-length ASCII, or any other files that do not have internal metadata. The Data Parser lets you define the Source record length, field sizes and data types, and data properties, assign Source field names, and define schemas with multiple record types.

IntegrationArchitect_StructuredSchemaDesigner.ppt


No Metadata Available (ASCII Fixed)

Objectives

At the end of this lesson you should be able to define and create a structured schema.

Keywords: Data Parser, Modify a Schema, and Hex Browser

Description

The first step will be to tell the Structured Schema what type of file is being defined, so pick the appropriate connector first.

You can change the name of the default record type from R1 to something more meaningful for your own task. This is done by choosing “Record Types” in the hierarchy and overtyping the existing default name.

To enter the field information for your record layout, select its “Fields” entry in the navigation tree and enter the first field name. Tab through the grid to enter a description (if desired), select a data type from the dropdown, and enter the field length.

This method assumes you have documentation that describes the structure of the file.

If you do not already know the structure of the file, you can use the visual parser to make educated guesses until the file is parsed correctly.

When you’re done, you can browse the data to ensure that your definitions were accurate and then save the structured schema using a name of your choosing.

Exercise

1. Start a New Structured Schema design and choose the ASCII Fixed connector

2. Click the Visual Parser toolbar button (Red Knife)

3. Navigate to the file named “Payments.txt”

4. Click in the current row (blue highlight) between the fields and name the fields by overtyping in the “Field Name” drop down list:

Record Layouts

Record R1

Name Type Length

AccountNumber Text 9

PaymentDate Text 8

Amount Text 10

Total 27


External Metadata (Cobol Copybook)

Objectives

At the end of this lesson you should be able to use a Cobol Copybook to create a new schema.

Keywords: Copybook

Description

You can quickly import the structure from an external definition file.

If you do this from a Map Design session, it will not create a Structured Schema for reuse later.

If you do this from a Structured Schema Design session, you will have the “ss.xml” file for reuse.

If you decide to use a Cobol Copybook to create a new schema, you will define this schema in the Enter External Connection Info window.

External Connector Section

This displays the connector you chose from the Connection pull-down. You cannot change the connector here; instead, make the change from the Connections pull-down in the toolbar options.

Layout/Record Name

When you choose the External File to duplicate, the data will populate the Layout/Record Name section. You will need to select layouts required for the schema by clicking each item in the Add to Layouts column.

There are two buttons to aid in selecting Layout/Record items:

Select all

Click Select all to choose all of the Layout/Record items.

Unselect all

If you need to make a change to the Layout/Record Name section, Click Unselect all to start over.

Exercise

1. Start a New Structured Schema design session and choose the Binary connector.

2. Using the drop-down menu in the upper right hand of the window, choose Cobol 01.

3. Navigate to the file named “Accounts_Cobol.cbk”.

4. Click the Layout/Record Name(s) you want to import.

5. Click OK.

6. Review the structure in the grid view.

7. Save the Structured Schema as “CobolCopyBook_Accounts.ss.xml”.

Record Layouts

Record ACCOUNT_INFO

Name Type Length


ACCTNUM Display 9

NAME Display 21

COMPANY Display 31

STREET Display 35

CITY Display 16

STATE Display 2

POSTCODE Display 10

EMAIL Display 25

BIRTHDATE Display 10

FAVORITES Display 11

STDPAYT Display sign leading 6

LASTPAYT Display sign leading 6

BALANCE Display sign leading 6


Binary Data and Code Pages

Objectives

At the end of this lesson you should be able to create a Structured Schema for a Binary, EBCDIC File.

Keywords: Binary, EBCDIC

Description

Creating a Structured Schema for a Binary, EBCDIC File

When working with binary files, we will usually need to tell the Structured Schema Designer that the file should be displayed and accessed using a coding structure other than ANSI. The most common binary coding structure is EBCDIC. To change this property, we will work with the SSD connection specification and specifically its Property Sheet. We can change the “Code Page” property to match the coding structure of the file we are working with.

Another issue with binary files is that the records are often some arbitrary length (e.g., 500 bytes) even though the logical records might be longer or shorter than that. As a result, when the data is displayed in the Visual Parser, it does not appear as if the data is structured. There is no automatic solution to this problem, but you can adjust the record length that the Visual Parser will use until you see the data lining up properly. Then you can parse normally, and the SSD will “remember” the record length you have set, and break the file apart properly when you use the schema in a Map Design.
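For instance, if the true logical record length is 183 bytes (as in the exercise below) but the parser is left at an assumed length of 180, each displayed row will start three bytes too early relative to the real record boundary, so the columns drift further out of alignment with every row; once the correct length is entered, the columns line up.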

Exercise

1. Start a New Structured Schema Design

2. Click the Visual Parser button (red knife)

3. Change the Code Page property to 37 US EBCDIC (click the Apply button!)

4. Navigate to the file named “Accounts_Binary.bin”

5. Determine the record length by looking for patterns in the file

6. Overtype the Length and hit Enter key (try 180, what happens?)

7. After you have the columns lined up, parse the fields, select data types and field properties until you have defined the structure.

8. Save the Structured Schema as “BinaryDataCodePages.ss.xml” for reuse

Record Layouts

Record R1

Name Type Length

AccountNumber Text 9

Name Text 21


Company Text 31

Address Text 35

City Text 16

State Text 2

ZipCode Text 10

Email Text 25

BirthDate Date 4

Favorites Text 11

StandardPayment Packed decimal 6

Payments Packed decimal 7

Balance Packed decimal 6

Total 183


Reuse Metadata (Reusing a Structured Schema)

Objectives

At the end of this lesson you will know the steps involved in applying a pre-developed Structured Schema to a new file that is supposed to follow the structure defined in that schema.

Keywords: Structured Schema

Description

This example transforms a Binary file into an ASCII Delimited file.

When you activate the Structured Schema Designer from the Source Tab or Target Tab, and have saved the schema, it is automatically attached to the current Transformation. If you wish to use a pre-defined schema, both the Source and Target Tabs have a dropdown from which an existing schema can be selected.

As soon as the schema is attached, the Source or Target information (hierarchy and field list) will be filled in on the Map Tab. You may change field names, lengths and data types, but only if you first “unlock” the schema.

Exercise

1. Start a New Map design session and choose the Binary connector

2. Select the Structured Schema named “BinaryDataCodePages.ss.xml”

3. Select the file named “Accounts_Binary.bin”

4. Change the source property Code Page to 37 US EBCDIC (click APPLY button!)

5. Browse the file to confirm the structure has been applied

6. If desired, you can finish the map based on the specifications below. The point of the lesson, though, is that a file can be parsed and the structure used as input for Map Designer. In this exercise we use it as a source connection. Structured Schemas can also be used as part of a target connection.

There follows some information taken from reports generated by Repository Manager from the Reusing_Structured_Schema transformation in the Solutions folder:

Source (Binary)

location $(funData)Accounts_Binary.bin

SourceOptions

codepage 0037 US (EBCDIC)


Structured Schema

BinaryDataCodePages.ss.xml


Target (ASCII (Delimited))

location $(funData)AccountsOut.txt

TargetOptions

header True

Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Records("R1").Fields("AccountNumber")

R1.Name Records("R1").Fields("Name")

R1.Company Records("R1").Fields("Company")

R1.Address Records("R1").Fields("Address")


R1.City Records("R1").Fields("City")

R1.State Records("R1").Fields("State")

R1.ZipCode Records("R1").Fields("ZipCode")

R1.Email Records("R1").Fields("Email")

R1.BirthDate Records("R1").Fields("BirthDate")

R1.Favorites Records("R1").Fields("Favorites")

R1.StandardPayment Records("R1").Fields("StandardPayment")

R1.Payments Records("R1").Fields("Payments")

R1.Balance Records("R1").Fields("Balance")


Multiple Record Type Support in Structured Schema Designer

Objectives

At the end of this lesson you should be able to discuss the differences between files that have multiple record types and those that don’t. You should be able to describe the tasks that will have to be performed to work with source files that have multiple record types. You should also be able to describe the actions you will have to take should you wish to create a target file with multiple record types.

Keywords: Record Types, Record Layouts, Discriminator, and Recognition Rules

Description

Files can be grouped into two main classifications relative to the records they contain. The first classification comprises files whose records are all of the same type: each record contains the same fields, in the same order and with the same properties. The second classification comprises files whose records have different formats. One record might contain ten fields while another might contain only six or perhaps twelve. One record type might describe a Customer while another describes a payment he made on his account. Certainly these two records would be different.

The critical issue for record type files is not the definition of the records themselves. These can be defined in the Structured Schema Designer with the Visual Parser (by parsing one of them, adding another, parsing it, adding another, and so on). They can also be defined using the grid interface within the SSD (where you simply enter record type names and then enter the field lists for each). You might also be able to import the record layouts, perhaps from a COBOL copybook or some other readable file.

The critical issue is how the Map Designer will be able to distinguish one record from another. For any application to be able to work with a file of this type, there must be some way to tell the records apart. There must be one common field in each record type whose value identifies the record type itself. If this were not true, no software application would be able to deal with the file, Map Designer included. This field is called the “discriminator” field, as it enables us to discriminate between record types.

Once the discriminator field has been identified, the remaining task is to define the values that it can have and associate these values with individual record types. For example, if the value of the field were “CUS,” we might know we have a Customer record type. Or if the value of the field were “PAY,” we might know we are dealing with a record that describes a payment on an account. These types of rules are called “recognition rules,” and we must define at least one such rule for each record type. Rules might not be so simple, but fortunately the Structured Schema Designer can work with very complex ones.
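As a purely hypothetical illustration, the first few characters of each record might carry the discriminator value:

CUS 000123456 Smith, John ...
PAY 000123456 20060815 150.00

Here the leading three characters (“CUS” or “PAY”) are the discriminator, and the rest of the record follows whichever layout that value selects.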

To create a structured schema for a source file that contains multiple record types, there are three possible strategies you can follow. Which strategy you choose depends on what information you already have available describing the file. The three strategies are:

You have record layout definitions available in a file:

Import the record layout definition file into the SSD

Use the “ALL Record Type Rules”>”Recognition” dialog to define at least one rule for each record type


You have record layout definitions available in a printed document:

Select the connector type in the SSD

Use the Grid layout to define each record type and its fields

Use the “ALL Record Type Rules”>”Recognition” dialog to define at least one rule for each record type

You have no definitions available- only the data file:

Activate the SSD Visual Parser for your file

Name and parse each record type

Find and select the discriminator field

Use the “Recognition Rules” button to activate the Recognition Rules dialog and define at least one rule for each record type

The common element to these strategies is the definition of the “recognition rules.” These are defined in the “Recognition Rules” dialog, which is activated from either the “ALL Record Type Rules”>”Recognition” hierarchy item or the individual “R1 Rules”>”R1 Recognition” items on the grid layout in the SSD.

First, you’ll identify the discriminator- the field whose contents will be used to tell the record types apart. Next, you can use the Generate Rules button to automatically generate some skeleton rules for each record type. Finally, you can add the actual value that the discriminator field will contain for each record type (and adjust other properties of the rules as you wish). When you’re done, the structured schema for the file can be saved.

Scenario

We’ve been given a source file (Payments_MultiRecType.txt) that contains multiple record types but we have not been given any information about the file, its records or its fields. We do know that there are payment records and a total record, and that the payment records are supposed to contain an account number, payment date and payment amount. The total record is supposed to contain a file date and file total, but we don’t know where in the record each field is. We need to define a structured schema for this file.

Exercise

1. Begin a new Map Design.

2. Point the source to the ASCII Fixed file Payments_MultiRecType.txt.

3. Browse the source file and determine whether record types exist. Close the browser.

4. Click the “Build Schema...” button for the Structured Schema.

5. Click the “Parse Data” button.

6. Rename the Record to Payment and parse a payment record.

Record Payment

Name Type Length

RecordIndicator Text 1


AccountNumber Text 9

PaymentDate Text 8

Amount Text 11

Total 29

7. Click the “Add Record” button and add the CheckSum record type.

8. Scroll down until you find the next different structured record (row 30).

9. Parse that record type with its fields.

Record CheckSum

Name Type Length

RecordIndicator Text 1

EmptiedDate Text 8

Action Text 3

TotalAmount Text 9

PaymentCount Text 4

ClerkID Text 4

Total 64

10. Select the Payment record from the Record dropdown and ensure that the RecordIndicator field is displayed in the “Field Name” box.

11. Check the Discriminator check box.

12. Click the “Recognition Rules...” button.

13. Click the “Generate Rules” button.

14. Define PaymentRule1 to be that the discriminator field equals “P” (Note that quotation marks are not used in the Value box).

15. Define CheckSumRule1 to be that the discriminator field must be equal to “E”.

16. Return to the Structured Schema Designer dialog.

17. Save the structured schema as Payments_MultiRecType.ss.xml.

18. Close the Structured Schema Designer.

19. Browse the source file again and note how the structured schema information has been applied to it. Look at both kinds of records and see how the browser changes.


Conflict Resolution

Objectives

At the end of this lesson you should be able to use a Structured Schema to set up a Map that uses one source record type to verify the data in the other record type.

Keywords: Schema Mismatch Handling, Record Specific Event Handlers, and Validation

Description

Our newly defined payment file structure gives us more robust data validation opportunities as we load the Payments table, because we now have checksum values against which we can validate the data. We are not going to use the additional “Clerk” fields in our Payments table, but we will take the opportunity to refine the Payments table structure and modify our Transformation Map that loads it.

The additional record layout in our payments file has data that allows us to evaluate aggregated data with checksum values. We can make use of the record specific Event Handlers to perform the evaluations at the appropriate time.

Default Event Actions - Multiple Record Types

The Map Designer no longer sets the default Event Handler for you. Once you have specified any other event actions OR you have a map with multiple record types, you must define the actions yourself.

Exercise

1. Build this map based on the specifications in the reports below.

There follows some information taken from reports generated by Repository Manager from the Multi_Rec_Payment_Validation transformation in the Solutions folder:

Source (ASCII (Fixed))

location $(funData)Payments_MultiRecType.txt

Structured Schema

Payments_MultiRecType.ss.xml

Target (ODBC 3.x)

Database TrainingDB


table tblPaymentsVerified

Outputmode Clear File/Table contents and Append

Variables

Name Type Public Value

paymentCounter Variant no

paymentSubtotal Variant no

Payment Events

AfterEveryRecord Execute

expression paymentSubtotal = paymentSubtotal + Records("Payment").Fields("Amount")

paymentCounter = paymentCounter + 1

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

buffered false

CheckSum Events

AfterEveryRecord Execute

expression 'This code can be imported by the menu, File > Open Script File > ChecksumTest.rifl

' declare temp variables used for better readability

Dim CRLF, realTotal, realCount

CRLF = Chr(13)&Chr(10)

realTotal = Records("CheckSum").Fields("TotalAmount")


realCount = Records("CheckSum").Fields("PaymentCount")

' display current count and payment sub-total for each clerk

MsgBox("---New Checksum---" & CRLF & _

"PaymentCounter= " & paymentCounter & " : Should be = " & realCount & CRLF & _

"Paymt Amt= " & paymentSubtotal & " : Should be = " & realTotal)

' evaluate count and sub-total for inconsistencies

If paymentSubtotal <> Trim(realTotal) Then

MsgBox("Total payment amount for this clerk does not match checksum amount!!!",

48)

End If

If paymentCounter <> Trim(realCount) Then

MsgBox("Payment Count for this clerk does not match checksum amount!!!", 48)

End If

' reset global variables for next clerk

paymentCounter = 0

paymentSubtotal = 0

Map Expressions

R1.AccountNumber Records("Payment").Fields("AccountNumber")

R1.PaymentDate Datevalmask(Trim(Records("Payment").Fields("PaymentDate")), "mmddyyyy")

R1.PaymentAmount Records("Payment").Fields("Amount") / 100
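The division by 100 in the PaymentAmount expression reflects the fact that the fixed-width source appears to store the amount with an implied two-decimal point (so a source value of, say, 15000 loads as 150.00); verify this against your own source data before relying on it.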


Extract Schema Designer

The Extract Schema Designer has the ability to read complex text files of many kinds. The amount of computer data is exploding all around us, and much of it is provided in raw text formats. Some examples of the many sources handled by the Extract Schema Designer follow:

Printouts from programs captured as disk files

Reports of any size or dimension

ASCII or any type of EBCDIC text files

Spooled print files

Fixed length sequential files

Complex multi-line files

Downloaded text files (e.g., news retrieval, financial, real estate...)

HTML and other structured documents

Internet text downloads

E-mail header and body

On-line textual databases

CD-ROM textbases

Files with tagged data fields

Extract Schema Designer does NOT use the XML repository that all of our other Design Tools use. Extract Schema Designer saves Extracts in two ways. The first is in a script file in Content Extractor Language with a “.cxl” extension. This file is only useful as part of a Source Connection in Map Designer. It cannot be imported into Extract Schema Designer to be edited. The second way that an Extract is saved is in an Access Database. The default path and filename for this database is C:\Program Files\Pervasive\Cosmos\Common800\Extractor800.mdb . Extracts stored here can be reopened and edited.

Content Extractor Language is very rich and expressive, and provides many advanced data manipulation and formatting capabilities. CXL can be used to create or customize complex scripts necessary for text files whose patterns and rules may be beyond the functionality of the user interface supplied with the Extract Schema Designer. More information about this language is available in the Content Extraction Language Help file under the SDK Help Files.

Former users of Data Junction Content Extractor should be aware that the script files are no longer called “DJP” files. They are known as “CXL” files now.

Also, there are several names that may be used in place of the default connector name of Extract Schema Designer’s Connector. This list includes: Cambio, Content Extractor, Extractor, and Report Reader.

There are also two connectors that have a pre-designed script included with the software that parse statistical information from the log file automatically. These are Data Junction Log File and Integration Log File.


IntegrationArchitect_ExtractSchemaDesigner.ppt


Interface Fundamentals & CXL

Keywords: Extract Schema Designer Mechanics: Line Styles, Fields, Accept Record, Automatic Parsing

Description

The first file that we will be parsing is Purchases_Phone.txt. We should take a look at it first in a text viewer. Although it might be possible to use this report file as a direct input for a transformation, we would have to define it as a multiple-record-type file. With so many record types and so much processing involved with them, writing the transformation would be time consuming. So what we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. We don’t even have to have a two-step procedure or read the report file twice. Once the extract schema is defined, we can create a transformation, specify the report file as the Source, and apply the Extract Schema to it. The file will then be presented to the transformation in simple rows and columns- complete with headers.

Exercise

Start Extract Schema Designer.

1. From the Repository Explorer, select New Object >Extract Schema.

2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Phone.txt.

3. Choose OK to accept the Source Options defaults.

4. Highlight the word “Category” on one of the Category lines and right-click in the highlight.

5. Select Define Line Style>New Line Style.

6. Verify that all defaults are acceptable and click Add. We’ve now defined a Line Style for the Category field.

7. Highlight the Category code on one of the Category lines and right-click in the highlight.

8. Select Define Data Field>New Data Field.

9. Change the field name to Category.

10. Verify that all other defaults are acceptable and click Add. We’ve now defined the Category Data Field.

11. Highlight a ProductNumber and the rest of the spaces on the line and right-click in the highlight.

12. Select Define Data Field>New Data Field.

13. Change the field name to ProductNumber.

14. Verify that all other defaults are acceptable and click Add.

15. Highlight a Quantity and all but one of the spaces between the actual digits of the Quantity and the colon following the literal “Quantity” (if any).


16. Right-click in the highlight and select Define Data Field>New Data Field.

17. Change the field name to Quantity.

18. Verify that all other defaults are acceptable and click Add.

Now let’s ensure that Source Options will allow parsing:

20. Select Source>Options from the Menu bar.

21. On the Extract Design Choices tab, look in the Tag Separator dropdown to see if there is a character sequence that matches the sequences used in your data to separate Line Style “tags” from actual data. If there is, select it. If there is not, then automatic parsing is not available. Also on this tab, ensure that the Trim Leading and Trailing Spaces checkbox is selected.

22. On the Display Choices tab, ensure that the Pad Lines checkbox is selected.

23. Click OK to accept the selections.

Now let’s define the UnitCost Line Style and Data Field simultaneously.

24. Highlight an entire UnitCost line in the data and right-click in the highlight.

25. Select Define Data Field>Parse Tagged Data.

NOTE: When Line Styles and Fields are defined in this way, the default name for the Field is exactly the same as that for the Line Style, so no change to the field name is usually necessary. If a change is desired, however, point your cursor to the actual field data in the display and double-click on the data. This will bring up the Field Definition dialog box and you can change the name (or other characteristics) here.

Now we’ll define the TotalCost and ShipmentMethodCode Line Styles and Data Fields simultaneously.

26. Highlight an entire TotalCost line and ShipmentMethodCode line in the data.

27. Right-click in the highlight and select Define Data Field>Parse Tagged Data.

The next thing is to define the Line Style that determines the end of a row of data for the Extract File.

28. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, ShipmentMethodCode).

29. Double-click on the Line Style name to bring up the Line Style Definition dialog.

30. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults.

31. Click Update.

Test the Extract to ensure that your definitions are correct.

32. Click on the Browse Data Record button.

33. Choose OK to allow assignment of all Fields to the Extract File.

34. Examine the data to ensure that your Field definitions are correct.

35. Close the browser window.

36. Use the “Parse Tagged Data” functionality to define the Account Number, Purchase Order Number and PODate fields.


37. Double-click on a Purchase Order Number to access the Field Definition dialog.

Note: The options at this tab determine how the Extract Schema Designer will process the data in this particular field from record to record. The use of these options makes a distinction between the data fields and the contents of those fields. When the Extract Schema Designer is collecting data fields, it collects all the fields that have been defined on lines of text whose line action is either COLLECT Fields or ACCEPT Record and assembles those fields into a data record. The options at this tab determine how data within a data field is handled.

38. On the Data Collection/Output tab, ensure that Propagate Field Contents has been selected.

39. Double-click on a PODate to access the Field Definition dialog.

40. On the Data Collection/Output tab, select Flush Field Contents.

41. Click Update.

42. Click on the Browse Data Record button.

43. Choose OK to allow assignment of all Fields to the Extract File.

44. Examine the data to see the effect of “Propagate” versus “Flush”.

45. Close the browser window.

46. Redefine the PODate field to propagate it as well.

47. Browse the data record again to ensure the data is being propagated.

NOTE: In this case we do want the data to propagate, but you will need to decide which behavior you want for any situation.

We can specify an order for the columns in your Extract File rows (if desired).

48. Choose Field>Export Field Layout from the menu bar.

49. To reposition a column, left-click and drag a column name up or down in the list, dropping it on top of another column name.


NOTE: When you drag “upward,” the column you are dragging will be placed before the column on which you drop it. When you drag “downward,” the column you are dragging will be placed after the column on which you drop it.

50. Put the six columns in the order they appear in the source file.

51. Click OK.

52. Exclude columns from the Extract File rows (if desired).

53. Select Record>Edit Accept Record from the menu bar.

54. Clear the check boxes for the columns that you do not wish to appear in the Extract File.

55. Click Update.

Save the Extract Schema Definition:

If the Extract Schema Definition has already been saved before, click the Save Extract button to save it again under the same name. You may also choose File>Save Extract to perform the same function.

If the Extract Schema Definition has not yet been saved, click the Save Extract button. In the “Save” dialog, supply the name PhonePurchases.cxl and verify the location where the Definition will be stored (changing it if necessary). You may also choose File>Save Extract to perform the same function.

If the Extract Schema Definition has been saved before, but you have modified it and want to save it as a different Definition, then choose File>Save Extract As . In the “Save” dialog, supply a name for the Definition and verify or supply the save location.

Close the Extract Schema Designer

56. Open Map Designer and establish a source connection based on the information below.

57. Open the Source Data Browser and note the results. Note that this source could now be used in the same way that any other source would be in a transformation.

58. Close Map Designer without saving.

Source (Extract Schema Designer's Connector)

location $(funData)Purchases_Phone.txt

Schema File

programfile C:\Cosmos_Work\Fundamentals\Solutions\Extract_Schema_Designer\InterfaceFundamentals.cxl


Data Collection/Output Options

Keywords: Data output properties: Flush or Propagate field contents

Description

Most of the time, all you will want to extract from files such as our report file is the actual data that describes the business objects, in this case the purchases. But sometimes there will be other information in the file that you would also like to capture. For example, there may be “header” or “footer” information that you would like to have available in the transformation. With the Extract Schema Designer we can define header and/or footer information and add it, as additional columns, to the row/column file specification.

Exercise

1. From the Repository Explorer, select New Object>Extract Schema.

2. At the file selection prompt, click Cancel.

3. Double-click on the Purchases_Phone.cxl script to open it.

4. Choose File>Save Extract As and save the extract again as Purchases_Phone2.cxl.

5. Highlight the first slash in the ReportDate.

6. Right-click in the highlight and select Define Line Style>New Line Style.

7. Change the proposed name to ReportDate.

8. Choose Add.

9. Highlight the second slash in the ReportDate.

10. Right-click in the highlight and choose Define Line Style>Append Line Pattern.

11. Double-click on the ReportDate line style name to view the results.

NOTE: This Line Style definition will be sufficient so long as there is no other line of information in the file that has slashes in positions 24 and 27 and which does not contain a Report Date. If there were, we could use the same procedure to add the spaces in front of and after the actual date. If that were still not sufficient, then we could use additional techniques that we will learn in later exercises to make the Line Style definition a unique one.

59. Highlight the Report Date.

12. Right-click in the highlight and select Define Data Field>New Data Field.

13. In the Field Definition dialog, change the name of the Field to ReportDate.

14. Click Add.

15. Use the Browse Data Record button to view the results.

16. Highlight the entire Order File Creator text line at the bottom of the file.

17. Right-click in the highlight and select Define Data Field>Parse Tagged Data.


18. Double-click on the Order_File_Creator Line Style to change its name (if desired).

19. Double-click on the actual email address to open the Field Definition dialog.

20. Change the Field Name to OrderFileCreatorEmailAddress.

21. Click Update.

22. Use the Browse Data Record button to view the results.

23. Close the browser then Double-click on the Order_File_Creator Line Style name to open the Line Style Definition dialog.

24. On the Line Action tab, change the action to ACCEPT Record.

25. Click Update.

26. Choose Record>Edit Accept Record from the menu bar.

27. Choose Order_File_Creator for the Current Accept Record.

28. Select the OrderFileCreatorEmailAddress checkbox.

29. Choose ShipmentMethodCode for the Current Accept Record.

30. De-select the OrderFileCreatorEmailAddress checkbox.

31. Click Update.

32. Use the Browse Data Record button to view the results.

33. Save the Extract Schema Design as Purchases_Phone2.cxl and close the Extract Schema Designer.

NOTE: When an Extract Schema Design like this one is used as part of the Source specification for a transformation, the transformation Map tab will look as if the input file had been defined to have multiple record types. The email address will be in the last record read by the transformation, of course. If your requirements dictate that the email address be available as actual purchase records are processed, then you will have to use other techniques in a more complex transformation.


Extracting Fixed Field Definitions

Keywords: Extract Schema Designer: Multiple Fields per Line Style (fixed)

Description

The next file that we will be parsing is Purchases_Mail.txt. We should take a look at it in a text viewer. Although it might be possible to use this report file as a direct input for a transformation, we would have to define it as a multiple-record-type file. Although there are fewer record types than with the phone purchases we dealt with earlier, there are still enough that when combined with the extra processing logic involved, the job would become tedious. So, again, what we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. As before, we don’t require multiple passes of the input file. We will just create the extract schema and apply it to the input on the Source tab of our eventual transformation.

Exercise

1. From the Repository Explorer, select New Object>Extract Schema.

2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Mail.txt.

3. In the Source Options dialog, on the Extract Design Choices tab, set the Tag Separator to “Colon-Space.” Also on this tab, ensure that the Trim Leading and Trailing Spaces checkbox is selected.

4. On the Display Choices tab, ensure that the Pad Lines checkbox is selected.

5. Choose OK to accept the selections.

6. Highlight the entire Account Number line in the data.

7. Right-click in the highlight and select Define Data Field>Parse Tagged Data.

8. Highlight the label Purchase Order Number.

9. Right-click in the highlight.

10. Select Define Line Style>New Line Style.

11. Change the Line Style Name to PONumber.

12. Choose Add.

13. Highlight the Purchase Order Number tag and the data following it.

14. Right-click in the highlight.

15. Select Define Data Field>Parse Tagged Data.

16. Define the PO_Date Field using the same technique

17. Define the Category Line Style and the three Fields on it using the same technique.

18. Define the Unit Cost Line Style and the three Fields on it using the same technique.


19. Define the Line Style that determines the end of a row of data for the Extract File.

20. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, Unit_Cost).

21. Double-click on the Line Style name to bring up the Line Style Definition dialog.

22. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults.

23. Choose Update.

24. Click on the Browse Data Record button.

25. Choose OK to allow assignment of all Fields to the Extract File.

26. Examine the data to ensure that your Field definitions are correct.

27. Close the browser window.

28. Ensure that the Fields are in the order they appear in the input data.

29. Save the Extract Schema Design as Purchases_Mail.cxl.

30. Close the Extract Schema Designer.

31. Remember that this schema can be used as part of a source connection in Map Designer.


Extracting Variable Fixed Field Definitions

Keywords: Extract Schema Designer: Multiple Fields per Line Style (variable)

Description

The next file that we will be parsing is Purchases_Fax.txt. We can examine it in a text viewer. Notice that this file has fields with variable lengths so that any given field may not occupy the same column position as it did in the previous record. What we plan to do is use the Extract Schema Designer to create an extract specification that will transform the report file into a more familiar row/column format, and then use that formatted data as input to the transformation that adds these purchases to the database table. As before, we don’t require multiple passes of the input file. We will just create the extract schema and apply it to the input on the Source tab of our eventual transformation.

Exercise

1. From the Repository Explorer, select New Object>Extract Schema.

2. At the prompt, navigate to the file you will be working with, in this case, Purchases_Fax.txt.

3. In the Source Options dialog, choose OK to accept the defaults.

4. Highlight the literal Order Header and right-click in the highlight.

5. Select Define Line Style>Auto New Line Style>Action - Collect fields.

6. Highlight an Account Number and Right-click in the highlight.

7. Select Define Data Field>New Data Field.

8. Change the Field Name to AccountNumber.

9. Choose Floating Tag.

10. Enter the tag “Account Number(“.

11. Use first tag starting at position 0.

12. Choose Floating Tag.

13. Enter the tag “)” (a single closing parenthesis).

14. Use first tag starting at position 0.

15. Choose Add.

16. Highlight a PO Number and right-click in the highlight.

17. Select Define Data Field>New Data Field.

18. Change the Field Name to PONumber.

19. For the Start Rule, select the first floating tag of “PO Number(“ starting at position 0.

20. For the End Rule, select the first floating tag of “)” starting at position 0.

21. Choose Add.


NOTE: When working with Floating Tags, the starting position for the End Rule is relative to the beginning of the Field being defined- not the beginning of the record. So even though the closing parenthesis for the PONumber is the second one from the beginning of the file, it is only the first one from the beginning of the PONumber.

22. Highlight a PO Date, right-click and select Define Data Field>New Data Field.

23. Change the Field Name to PODate.

24. For the Start Rule, select the first floating tag of “PO Date: ” starting at position 0. Please note that there is a space after the colon.

25. For the End Rule, choose End of Line.

26. Choose Add.

27. Highlight the literal Item and right-click in the highlight.

28. Select Define Line Style>Auto New Line Style>Action - Collect fields.

29. Highlight a Category and right-click in the highlight.

30. Select Define Data Field>New Data Field.

31. Change the Field Name to Category.

32. Choose Add.

33. Highlight a Product Number and right-click in the highlight.

34. Select Define Data Field>New Data Field.

35. Change the Field Name to ProductNumber.

36. For the Start Rule, select the first floating tag of “ /” starting at position 0.

37. For the End Rule, select the first floating tag of “ ” (a single space) starting at position 0.

38. Choose Add.

39. Highlight a Quantity, right-click and select Define Data Field>New Data Field.

40. Change the Field Name to Quantity.

41. For the Start Rule, select the third floating tag of “ “ (a single space) starting at position 0.

42. For the End Rule, select the first floating tag of “/” starting at position 0.

43. Choose Add.

44. Highlight a Unit Cost, right-click and select Define Data Field>New Data Field.

45. Change the Field Name to UnitCost.

46. For the Start Rule, select the second floating tag of “ / “ starting at position 0.

47. For the End Rule, select the first floating tag of “/ ” starting at position 0.

48. Choose Add.

49. Highlight a Shipment Method Code, right-click and select Define Data Field>New Data Field.

50. Change the Field Name to ShipmentMethodCode.


51. For the Start Rule, select the third floating tag of “ / “ starting at position 0.

52. For the End Rule, choose End of Line.

53. Choose Add.

54. Locate the Line Style that contains the Field that will be the last column in each row in the eventual extract file (in this case, Item).

55. Double-click on the Line Style name to bring up the Line Style Definition dialog.

56. On the Line Action tab, choose ACCEPT Record, and accept the remaining defaults.

57. Click Update.

58. Click on the Browse Data Record button.

59. Choose OK to allow assignment of all Fields to the Extract File.

60. Examine the data to ensure that your Field definitions are correct.

61. Close the browser window.

62. Ensure that the Fields are in the order they appear in the input data.

63. Save the Extract Schema Design as Purchases_Fax.cxl.

64. Close the Extract Schema Designer.

65. Remember that this schema can be used as part of a source connection in Map Designer.


EasyLoader

EasyLoader is a simple, flat-file mapper that creates intermediate data load files. It supports single record source data and creates single record, flat, data load files. All targets are predefined by the system. There is a Programmer’s Guide describing how to create a predefined target. Section 3 below will walk you through creating a simple target. All events and actions required at run time are automatically created and hidden from view at design time.

Related Documents:

1. Easy_Loader.pdf – a document describing how to use EasyLoader.

2. Easy_Loader_Dev_Guide.pdf – a programmer’s document describing how to create targets for use with EasyLoader

In a default installation, these files are located here:

C:\Program Files\Pervasive\Cosmos\Common800\Help

Installing EasyLoader –

EasyLoader is installed with version 8.12 or newer, so you will need to install at least that version of the Integration products.


Overview & Introductory Presentation

Easy Loader is a one-to-one record flat file mapper that creates intermediate data load files. This means that Easy Loader supports single record Source data and creates single record flat data load files. All Target type connectors and schemas are predefined, which makes the user interface easy to learn.

In Easy Loader, all events and actions required at run time are automatically created and hidden from view at design time. This is an advantage when the end user is not proficient with the Map Designer tool. The idea is that the designer will create most of the Map at design time, leaving the simple mapping of the source into the target to the end user. Below is a list of more Easy Loader advantages over Map Designer:

• predefined Targets and Target schemas

• automatic addition of events and actions needed to run

• simplified mapping view

• auto-matching by name (case-insensitive; Map Designer is case sensitive)

• single field mapping wizard launched from the Target fields grid (use the wizard to map all fields, or just one field at a time)

• predefined Target record validation rules

• specific validation error logging for quick data cleansing

Easy Loader requires a pre-defined Structured Schema and it creates a tf.xml and a map.xml file that can be used in the same ways that Transformations built in Map Designer are used.

There are courseware and exercises on the Easy Loader in this document.

EasyLoaderTraining.ppt


Using EasyLoader to create/run maps – using wizard

Objectives

At the end of this lesson you should be able to create, run and save a map using EasyLoader’s new map wizard. You will see the following EasyLoader advantages:

Ease of use

Predefined targets and target schemas

Automatic addition of events and actions needed to run

Simplified mapping view

Easier mapping

A mapping wizard that walks you through every step

Auto matching by name (case insensitive)

Fuzzy matching to show most likely source field matches

Field transformations through function building wizard

Dirty data detection

Predefined target record validation rules

Very specific validation error logging for quick cleansing

Description

The new map wizard will walk you through selecting a target, connecting to source data, then mapping target fields one by one until all fields have been mapped.

Selecting a Target

All targets for EasyLoader are predefined. You will be given a choice of only those defined on your system. We will discuss how to create targets for use with EasyLoader in a later section.

Connecting to the Source Data

EasyLoader only supports single record type sources that already contain metadata. While you can use a predefined schema for metadata, EasyLoader does not allow you to create a schema. If you connect to a source that contains multiple record types, an error will occur.

Mapping the Target Fields

The wizard will walk you through a series of dialogs for each target field. You will be given the choice to map the target field to a source field, a constant, or Null (No Map). If mapped to a source field, you will have the additional option of building a simple field transformation using the chosen source field. Field transformations through the wizard allow you to choose functions only. If you need something more complicated (i.e., flow control statements), you will need to use the Expression Editor via grid-level mapping (see section 2 below).
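For instance (an illustrative expression only; the record and field names are assumptions based on the exercise below), a simple wizard-built transformation on a single source field might amount to an expression such as:

Left$(Records("R1").Fields("Account No"), 2)

which keeps just the first two characters of the chosen source field.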

Exercise Steps

1. Launch EasyLoader

2. Click on the New toolbar button


Selecting the target:

3. Select School Messenger as the target and accept Student as the target schema (Press Next)

4. Accept the default target file location and name (Press Next)

Connecting to the source:

5. Accept/Select Ascii (Delimited) as the source (Press Next)

6. Enter Tutor1.asc as the source file name and set the Header=true property. Don’t forget to press apply after setting the header property. (Press OK)

7. Make sure the source data looks right (Press Next)

Mapping Loop:

8. Target field 1 (School Name) – map to constant obtained from default field expression

9. Accept the wizard mapping option (Press Next)

10. Accept “Cat Hollow Elemetary” as the constant map to School Name (Press Next)

11. You can navigate through the records if you want but the map is to a constant so the mapping results for all records will be the same. Accept the Done option (Press Next)

12. Target field 2 (School Number) – map to source field with transformation

13. Accept the wizard mapping option (Press Next)

14. Erase the 14006 value from the Constant textbox. Accept the Show All source field list option and drop down the source field list. Notice that the second column shows some fields that were already mapped. This is because EasyLoader does a match by name for you prior to entering the mapping loop. Select Account No as the source field to map to School Number. (Press Next)

15. Navigate through some records to view the mapping results. Select the Field Transformation option (Press Next)

16. Select the Left$ function (Press Next)

17. You will be taken into the function builder with the source field already filled in as one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK)

18. Navigate through some records to view the mapping results which should be the left 2 digits of the Account No source field. Accept the Done option (Press Next)

19. Target field 3 (Student lastname) – map to source field

20. Accept the wizard mapping option (Press Next)

21. Select the Show Most Likely source field list option and drop down the source field list. EasyLoader has detected a couple source field names that might match the target. If you do not see the one you need in this list, you can select the Show All source field list option. Select Last Name (Press Next)

22. Navigate through some records to view the mapping results. Accept the Done option (Press Next)

23. Target fields 4 and 5 (Student firstname, Student Address1)


24. Repeat the above steps for target fields 4 & 5 selecting source fields First Name, then Address.

25. Target field 6 (Address2)

26. Accept the wizard mapping option (Press Next)

27. Select the checkbox for No Mapping (Press Next)

28. Notice the Null mapping results. Accept the Done option (Press Next)

29. Target fields 7-9 (City, State and Zip)

30. Move through the mapping dialogs for these fields by continuing to press Next. These fields have already been mapped for you.

31. Target fields 10 & 11 (Home Tel and Mobile Tel)

32. Move through the dialogs for these fields selecting the No Map checkbox.

33. Target field 12 (Age) – map to source field with transformation

34. Accept the wizard mapping option (Press Next)

35. Accept the Show All source field list option and drop down the source field list. Notice that the second column shows all previously mapped source fields. This does not mean you cannot map the source field to another target field. Select Account No as the source field to map to Age. (Press Next)

36. Navigate through some records to view the mapping results. Select the Field Transformation option (Press Next)

37. Select the Right$ function (Press Next)

38. You will be taken into the function builder with the source field already filled in as one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK)

39. Navigate through some records to view the mapping results which should be the right 2 digits of the Account No source field. Accept the Done option (Press Next)

40. Target fields 13 & 14 (Grade Level and Gender) – map to constant

41. Accept the wizard mapping option (Press Next)

42. Enter the value 4 into the constant textbox (Press Next)

43. You can navigate through the records if you want but the map is to a constant so the mapping results for all records will be the same. Accept the Done option (Press Next)

44. Repeat for Gender but enter 2 (for Male)

45. Target field 15 (Home Language) – map to constant from default field expression

46. Accept all the defaults here (Press Next)

47. Press OK to exit the wizard

Saving and Running the Map

48. Press the save toolbar button and save the map. Notice that it is the same map that Map Designer saves.

49. Press the run toolbar button to run the map. Notice that not all the records converted.


50. Press the logfile toolbar button to view the log. EasyLoader predefined targets come with very detailed record validation rules. Page through the log files to view why some of the records did not convert.

Summary

This exercise allowed you to create a map using the EasyLoader new map wizard. You were then able to save, run the map, and view any data validation errors.


Using EasyLoader to create/run maps – without wizard

At the end of this lesson you should be able to create, run and save a map WITHOUT using EasyLoader's new map wizard. You will see the following EasyLoader advantages:

Ease of use

Predefined targets and target schemas

Automatic addition of events and actions needed to run

Simplified mapping view

Easier mapping

Auto matching by name (case insensitive)

Single field mapping wizard launched from the target fields grid

Dirty data detection

Predefined target record validation rules

Very specific validation error logging for quick cleansing

Description

It is not necessary for you to use the new map wizard to create a map in EasyLoader. There are new toolbar buttons that allow you to select a target and connect to source data. Grid level mapping is available through either the target field expression drop down or a new single field mapping wizard.

Selecting a Target

All targets for EasyLoader are predefined. You will be given a choice of only those defined on your system. We will discuss how to create targets for use with EasyLoader in the next section.

Connecting to the Source Data

EasyLoader only supports single record type sources that already contain metadata. While you can use a predefined schema for metadata, EasyLoader does not allow you to create a schema. If you connect to a source that contains multiple record types, an error will occur.

Mapping the Target Fields

Grid level mapping can be done four ways:

Click on the new single field map wizard button in the column to the right of the target field expression. You will need to click once for the button to get the focus, then again to launch the single field map wizard.

Drop down the target field expression column and select a source field.

Drop down the target field expression column and select <Build Expression…>

Click inside the target field expression cell and enter a field expression.
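For example, an expression typed or selected directly in the cell might be nothing more than a source field reference or a constant (illustrative only; field names come from your own source):

Fields("Last Name")    ' map straight to a source field
"4"                    ' map to a constant value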

Exercise Steps

1. Launch EasyLoader

2. Click on the Target Connection toolbar button


Selecting the target:

3. Select School Messenger as the target and accept Student as the target schema (Press Next)

4. Accept the default target file location and name (Press Next)

5. At this point you will exit the target selection process and be back on the main screen where you will see the target schema.

Connecting to the source:

6. Click on the Source Connection toolbar button

7. In the source connection dialog, click on the drop down arrow beside the connection textbox and choose factory connection: ASCII (Delimited)

8. Enter Tutor1.asc as the source file name and set the Header=true property. Don’t forget to press apply after setting the header property. (Press OK)

9. At this point you will exit the source connection dialog and be back on the main screen where you will see that we have added a couple of target field expressions. These are the ones found by EasyLoader while doing a case-insensitive match by name.

10. If the test expression panel is not showing in the lower right hand corner of the main window, select View > Field Expression Results so that you can navigate through the source records and view your mapping results in the target field Results column at any time.

Mapping the rest of the target fields:

11. Target field 3 (Student lastname) – map to source field

12. Drop down the target field expression column for this field and select Fields("Last Name")

13. Target fields 4 and 5 (Student firstname, Student Address1)

14. Repeat the above step for target fields 4 & 5 selecting source fields First Name, then Address.

15. Target field 6 (Address2)

16. Leave blank

17. Target fields 7-9 (City, State and Zip)

18. These are already mapped for you.

19. Target fields 10 & 11 (Home Tel and Mobile Tel)

20. Leave blank.

21. Target field 12 (Age) – map to source field with transformation

22. Click on the wizard wand button in the column to the right of this target field expression. Clicking once sets the focus on this button.

23. Click on the button again to launch the single field mapping wizard. This is the same mapping wizard you used in section 1 above.


24. Click on the Show All source field list option and drop down the source field list. Notice that the second column shows all previously mapped source fields. Select Account No as the source field to map to Age. (Press Next)

25. Navigate through some records to view the mapping results. Select the Field Transformation option (Press Next)

26. Select the Right$ function (Press Next)

27. You will be taken into the function builder with the source field already filled in as one of the function parameters. Enter 2 into the second (Length) parameter. (Press OK)

28. Navigate through some records to view the mapping results which should be the right 2 digits of the Account No source field. Accept the Done option (Press Next)

29. This will put you back on to the main form with the newly created field expression in the grid. The row has a pencil image in the row selector column meaning that the changes have not been committed yet. Click on the row above this one to commit the changes.

30. Target field 13 (Grade Level) – map to constant

31. Click inside the target field expression cell for this field.

32. Type in “4” then click on the row above to commit the change.

33. Target field 14 (Gender) – map to complex expression

34. Drop down the target field expression column for this field and select <Build Expression…>

35. In the expression editor, click on Flow Control in the bottom left tree. Then double click on If…Then…Else in the bottom right grid. You will see the following in the expression text box:

If condition then

statement block1

Else

statement block2

End if

36. Highlight the word condition and then select R1 under Source in the bottom left tree. Then double click on Payment. Then add < 300.

37. Replace statement block1 with return “1”

38. Replace statement block2 with return “2”

39. You will see the following in the expression text box:

If Records("R1").Fields("Payment") < 300 then

return "1"

Else

return "2"

End if

40. Press OK


41. Click on the row above this field expression to commit your changes. NOTE: the above transformation makes no sense for Gender, but it shows how to create a complex expression when you need to.

42. Navigate through the source records to view your mapping results.

Saving and Running the Map

43. Press the save toolbar button and save the map. Notice that it is the same map that Map Designer saves.

44. Press the run toolbar button to run the map. Notice that not all the records converted.

45. Press the logfile toolbar button to view the log. EasyLoader predefined targets come with very detailed record validation rules. Page through the log files to view why some of the records did not convert.

Summary

This exercise allowed you to create a map WITHOUT using the EasyLoader new map wizard. You were then able to save, run the map, and view any data validation errors.


Creating Targets for use with Easy Loader

Objectives

At the end of this lesson you should be able to create targets for use with EasyLoader.

Description

EasyLoader works with Predefined targets found in a Targets folder off of the InstallDir\Common800 subdirectory. We ship several samples including the School Messenger target used in sections 1 and 2 above.

In order to create a target for use within EasyLoader, you have to do the following:

1. Create a new subdirectory under the Targets folder. The name of the subdirectory should be the name of the target you want to create.

2. Create a target connection file for this new target (targetname.tc.xml).

3. Create one or more target schemas for this target (schemaname.ss.xml).

4. Create target record validation rules for your target schemas.

5. Place all files created for the target in the new subdirectory created in step 1.

6. Launch EasyLoader and verify your target is there and it works.
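For example, after completing these steps for the MyTarget target built in this exercise, the Targets folder would look roughly like this (the School Messenger sample ships with the product; file names follow the targetname.tc.xml and schemaname.ss.xml conventions described above, so the sample's actual names may differ slightly):

Targets\
    School Messenger\
        School Messenger.tc.xml
        Student.ss.xml
    MyTarget\
        MyTarget.tc.xml
        Customer.ss.xml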

Exercise Steps

Create a target folder

1. Open up your windows explorer and navigate to your Cosmos InstallDir\Common800\Targets subdirectory. NOTE: If you do not have a Targets subdirectory, you have not installed EasyLoader properly.

2. Create a new folder called MyTarget

Create a target connection file

3. Launch Map Designer and click on the target tab

4. Click on the drop down arrow to the right of the connection textbox and select the factory connection Excel 95. (Press OK)

5. Change the HeaderRecordRow property to 1 and press Apply.

6. Type Sheet1 into the Sheet textbox.

7. Click on the Target Connection Properties button (upper right most button to the right of the target connection textbox). Change the Author to Factory and give the connection a description. Click OK to exit.

8. Click on the save target connection icon (top right row of buttons, second from the right)

9. Type MyTarget and press OK. This saves MyTarget.tc.xml in the user directory.

10. Move this file to your newly created MyTarget subdirectory under Common800\Targets

Create a target schema file

11. While in Map Designer with your target connection defined above, click on the map tab.

12. Move the horizontal (source/target) splitter bar up so that you mostly see the target information. You will be doing work on the target only.


13. Click on the Record Types target tree node. In the grid to the right you should see one record type named R1. Rename this record to the name of the schema you plan to create. For example, if your schema is a Customer record, rename the record Customer. After changing the R1 name, click on the next grid cell over to commit your changes.

14. Click on the newly named RecordName Fields target tree node (i.e. Customer Fields). The grid to the right is for entering the target schema fields. NOTE: we are going to manually enter the schema fields. If you have a schema that you want to import, you can stay on the Record Types node and follow Map Designer's instructions for using the Schema Origin column in the record grid to import a schema. If you do this, remember that EasyLoader targets MUST be single-record, flat targets. If you have a target schema that is hierarchical, you'll need to flatten it out into one record type.

15. When creating a schema for use with EasyLoader, it is important to do the following:

Use easy to understand field names

Add very specific field descriptions including any limitations (e.g. Max value 1000, or Possible values are M and F).

Set the “Field Required” and “Default Expr” field properties if applicable.

If Boolean datatype, be sure to enter the “Picture” field property.

16. Add the following fields to your Customer record (Name, Desc, Dtype, Size, Required, Default Expr):

Name: Customer name (First and Last separated by a space, or it could be a business name). Text, 100, Required: Yes

Country: Customer country of residence. Possible values are USA, Canada, Mexico. Text, 25, Required: Yes, Default Expr: "USA"

IsActive: True indicates customer is an active account. Possible values are 0 (for false) or 1 (for true). Numeric, 1, Required: No, Default Expr: "1"

17. At this point we are going to save the schema. Click on the Record Types target tree node again. Then right mouse and select Save Schema As… When the dialog comes up, name the schema Customer. Press OK. This will create Customer.ss.xml.

18. Notice when you are done that the Customer Fields node has a lock on it. Click on this tree node and right mouse, then select Unlock Schema. We want to edit it some more.

Create Record Validation Rules for your schema

19. Click on the Customer Rules target tree node to expand it. Then click on the Customer Validation target tree node. You will see a grid to the right allowing for ONE row of RIFL validation code. NOTE: If your validation rules approach the 32767 character limit, you will need to create functions out of your rules, store the functions in a RIFL code module and call the functions from within this Record Validation Rule expression. See the Programmer's manual on how to do this.

20. Click on the … button inside the grid to take you out to the expression editor for making the validation rules. The following was taken directly from the Programmers Guide describing the format your validation rules should take.

The record validation rules should be in the following format:

Beginning:


A comment that specifically reads: 'TargetName_Schema Validation Rules

Dim statements for any local variables you intend to use in your RIFL validation expression, including a Boolean variable to return at the end (indicating whether the record is valid) AND a record identifier variable.

Some code that initializes the boolean variable to return at the end and initializes a record identifier to use when logging validation errors. The record identifier variable will only be set once. Think of it as the record’s key.

Middle:

1 to N validation rules

End:

Return validation boolean variable

Aside from this format, your validation logic has access to a global "reccnt" variable that will hold the value for the current record being read and transformed. Your validation logic should check for validation errors and, when found, use the reccnt and fldid (record identifier) variables to log a very descriptive validation error.

An example for our Customer record might look like this:

21. Enter the following into the expression editor:

'MyTarget_Customer Validation Rules
Dim isvalidrecord 'the boolean validation variable to return in the end
Dim temp 'a temporary variable
Dim fldvalue 'a variable to hold a target field value
Dim fldid 'a variable to hold the target record identification value

isvalidrecord = true 'Initializes the variable
fldid = Targets(0).Records(0).Fields("Name") 'set record key for use in LogMessage

'Check Name is not blank or null
fldvalue = Targets(0).Records(0).Fields("Name")
If (fldvalue == "" Or IsNull(fldvalue)) then
    Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid Name value (" & fldvalue & "). It should not be blank or null.")
    isvalidrecord = false
End if

'Check Country is USA, Canada or Mexico
fldvalue = Targets(0).Records(0).Fields("Country")
Select Case fldvalue
    Case "USA", "Canada", "Mexico"
        temp = 1 'needed because a case must have a statement; ignored
    Case else
        Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid Country value (" & fldvalue & "). It should be USA, Canada or Mexico.")
        isvalidrecord = false
End Select

'Check IsActive is 0 or 1 (for false or true)
fldvalue = Targets(0).Records(0).Fields("IsActive")
If (fldvalue <> "0" And fldvalue <> "1") then
    Logmessage("WARN", "MyTarget_Customer VALIDATION WARNING--->Record: " & reccnt & ", Customer: " & fldid & " has invalid IsActive value (" & fldvalue & "). It should be 0 or 1 for false or true.")
    isvalidrecord = false
End if

'Done, return boolean value
return isvalidrecord

22. Click the validate toolbar button in the expression editor to make sure the expression you have typed is valid. Fix any errors.

23. Click OK

24. Click back on the Record Types target tree node, right mouse and select Save Schema As… to resave the schema. Overwrite Customer.ss.xml saved previously with no record validation rules.

NOTE: at this point you should test what you have written by connecting to a source, entering a target Excel file name, mapping some fields, saving the map and running it. Open the log file to see if you got the validation error results you expect. Any problems should be corrected and the schema should be resaved. Don't forget to Unlock Schema every time after a save to allow for continued editing.

25. Once you have fully tested your schema, move Customer.ss.xml file to the Targets\MyTarget subdirectory created above.


Testing your target within EasyLoader

26. Launch EasyLoader

27. Click on the New Map toolbar button to launch the wizard

28. Choose your target and schema (MyTarget, Customer)

29. Continue through the wizard as in section 1 above, connecting to a source and mapping your 3 fields.

30. Save and run the map. Compare the results with the test you ran above in Map Designer when testing the schema. If problems arise, go back into Map Designer to edit the schema.

NOTE: schema creation can also be done inside Structured Schema Designer. But if your validation rules approach the max character limit where you have to start creating a RIFL code module to house your validation rule functions, you can only create code modules within Map Designer.

Summary

This training module taught you how to create a target and a target schema for use within EasyLoader. You can create as many target schemas as you like (ss.xml files) for a given target, and you can create as many different targets as you want to support.


Process Designer for Data Integrator

Process Designer is a graphical data transformation management tool you can use to arrange your complete transformation project. With Process Designer, you can organize Map Designer Transformations with logical choices, SQL queries, global variables, Microsoft's DTS packages, and any other applications necessary to complete your data transformation. Once you have organized these Steps in the order of execution, you can run the entire workflow sequence as one unit.

IntegrationArchitect_ProcessDesigner.ppt


Process Designer Fundamentals

The heart of the integration product tool set is Map Designer. This is where the work is done to get data from one format, layout, or application to another. Process Designer integrates the Transformations created in the Map Designer with any other applications or processes that need to be done to complete the job.

Map Designer can be called from within the Process Designer using the Transformation dialog box or Right-Click Menu shortcuts. You can create entirely new Transformations, use existing Transformations, and/or copy original Transformations before using the “.tf.xml” file in a Transformation Step. The original Transformation file information remains unchanged. The Process Designer can be used from beginning to end to make your data Transformation task simpler and more streamlined.


Creating a Process

Objectives

At the end of this lesson you should be able to create a simple Process Design.

Keywords: Process Designer, Transformation Map, and Component

Description

To create a new Process, first consider what is necessary to accomplish the complete transformation of your data. Form a general idea of the logical steps to reach your goal, including which applications you need, and what decisions must be made during the Process.

Once you have a good idea of what will be involved, open Process Designer (via the Start Menu or Repository Explorer) and begin. Remember that Process Steps can be re-arranged, deleted, added, or edited as you build your design.

Exercise

1. Open Process Designer.

2. Add a Transformation step to the Process Design.

3. Right Click on the Transformation Map and choose Properties.

4. Click Browse and choose OutputModes_Clear_Append.map.xml from a previous exercise or from the solutions folder.

Note: A Process Designer SQL Session is a particular method of connecting to the given SQL application's API. We can use the same session in multiple steps or create new sessions wherever needed. We must have at least one session if any connection to a relational database is made during the process.


5. Let’s accept the default here by clicking OK.

6. Name this step Load_Accounts.

7. Add another Transformation step to the Process Design.

8. Right Click on the Transformation Map and choose Properties.

9. Click New to open the Map Designer.

10. Create a new map that loads Category.txt into the tblCategories table in the TrainingDB Database. Use the report below for specifications.

(ASCII (Delimited))

location $(funData)Category.txt

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1


Target (ODBC 3.x)

Database TrainingDB

table tblCategories

Outputmode Clear File/Table contents and Append

Map Expressions

R1.Code Fields("Field1")

R1.Category Fields("Field2")

R1.ProductManager Fields("Field3")

11. Accept the default for the Transformation Step dialog.

12. Choose “Use an existing session for the target” in the Sessions Dialog.

13. Name step Load_Categories.

14. Create a new map that loads ShippingMethod.txt into the tblShippingMethod table in the TrainingDB Database. Use the report below for specifications.

(ASCII (Delimited))

location $(funData)ShippingMethod.txt

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Target (ODBC 3.x)

Database TrainingDB

table tblShippingMethod


Outputmode Clear File/Table contents and Append

Map Expressions

R1.Shipping Method Code Fields("Shipping Method Code")

R1.Shipping Method Description Fields("Shipping Method Description")

15. Accept the default for the Transformation Step dialog.

16. Choose “Use an existing session for the target” in the Sessions Dialog.

17. Name step Load_ShippingMethod.

18. Establish the Step Sequence.

19. Validate the Process Design.

20. Save the Process as Load_Tables.

21. Run the Process Design.

22. Examine the Target Tables.


There follows some information taken from reports generated by Repository Manager from the Load_Tables process in the Solutions folder:

Start (Start)

LoadAccounts (Transformation)

processname ../MapDesigner_TransformationFundamentals/OutputModes_Clear_Append.map.xml

targetsession ODBC3x-1

Predecessors

Start Unconditional

LoadCategories (Transformation)

processname LoadtblCategories.map.xml

Predecessors

LoadAccounts Unconditional


LoadShippingMethod (Transformation)

processname LoadtblShippingMethod.map.xml

Predecessors

LoadCategories Unconditional

Stop (Stop)

Predecessors

LoadShippingMethod Unconditional


Conditional Branching – The Step Result Wizard

Objectives

At the end of this lesson you should be able to add a conditional statement to the Decision Step

Keywords: Error Handling; Conditional Branching; Metadata Execution Variables – Step Result Wizard

Description

The Decision Step allows you to design a conditional expression that decides which Step in the Process will be executed next. In other words, it lets you set up a conditional evaluation to make logical choices between possible workflow paths. Generally, this is done with a Boolean expression.

Exercise

1. Open Process Designer.

2. Add a Transformation step to the Process Design.

3. Right Click on the Transformation Map and choose Properties.


4. Click Browse and choose Reject_Connect_Info.map.xml from a previous exercise or from the solutions folder.

5. Accept the default for the Transformation Step and the Sessions dialog.

6. Name step LoadAccounts_CheckDates.

7. Add a Decision step to the Process Design.

8. Right-click on the Decision icon and select Properties.

9. Name the step Eval_RejectRecordCount.

10. Using the Step Result Wizard, create and add the following code:

project("ZipCode").RejectRecordCount > 0

11. Click OK to close.

12. Add a Scripting step to the Process Design.

13. Right-click on the Scripting icon and select Properties.

14. Use NotificationBadDates as the Step Name.

15. Use the Build button to build an expression that will display “There are STILL invalid dates!!" in a message box with a stop icon and an OK button and the title “Invalid Date Warning”

MsgBox("There are STILL invalid dates!!", 16, "Invalid Date Warning")

16. Click OK to close.

17. Link the Start step to the Transformation step

18. Link the Transformation step to the Decision step

19. Link the Decision step to the Stop step (this path should be followed if the Decision evaluates to “False”)

20. Link the Decision step to the Scripting step (this path should be followed if the Decision evaluates to “True”)

21. Link the Scripting step to the Stop step

22. Validate the Process Design

23. Save your Process Design as ConditionalBranching_StepResultWizard.ip.xml

24. Run the Process Design.

There follows some information taken from reports generated by Repository Manager from the ConditionalBranching_StepResultWizard process in the Solutions folder:

Sessions

ODBC3x-1 (ODBC 3.x)

Database TrainingDB


Steps

Start (Start)

LoadAccounts_CheckDates (Transformation)

processname ../MapDesigner_TransformationFundamentals/Reject_Connect_Info.map.xml

targetsession ODBC3x-1

Predecessors

Start Unconditional

Stop (Stop)

Predecessors

EvalRejectRecordCount False

NotificationBadDates Unconditional

EvalRejectRecordCount (Decision)

Project("LoadAccounts_CheckDates").RejectRecordCount > 0

Predecessors

LoadAccounts_CheckDates Unconditional

NotificationBadDates (Scripting)

MsgBox("There are STILL invalid dates!!", 16, "Invalid Date Warning")

Predecessors

EvalRejectRecordCount True



Parallel vs. Sequential Processing

Objectives

At the end of this lesson you should be able to create a parallel process.

Keywords: Multi-threaded, Single-threaded

Description

Integration Engine can execute single-threaded or multi-threaded processes, depending on your license.

Process Designer now utilizes the power and speed of multithreading when running a Process. If you own the multithreaded Integration Engine, you can allow the operating system to control the load balancing across CPUs for more efficient processing. This even works when spinning multiple threads off of a single processor.

If you set up the Maps within your Process to run in parallel, the Process Designer will launch each Map in parallel on its own thread. There is no need to code anything within your Maps or Processes. It is all done for you behind the scenes as long as you set the Max Concurrent threads property in the Process Design.

Multithreading allows parallel execution of Process Designer Steps, where several Transformation Steps in Process Designer can be simultaneously executed across multiple CPUs on a server.

Exercise

1. Open the “CreatingAProcess” process from a previous exercise or from the solutions folder.

2. Run it and check the log file for the length of time the process took to run.

3. Make changes in the way the steps are linked as shown in the figure below.

4. Open the Process Properties Dialog and set “Max Concurrent Execution Threads” to 3.

5. Validate the process, and then save as ParallelProcessing.ip.xml.

6. Run it and check the log file for the length of time the process took to run.


FileList - Batch Processing Multiple Files

Objectives

At the end of this lesson you should be able to build a Filelist that gathers a list of file names and stores them in an array variable.

Keywords: Change Source Action, "NUL:" connection string, File List Function, Array Variables, and Looping

Description

The FileList function builds a list of user-specified file types.

It returns a 'Type Mismatch' error if the results parameter is not an array.

The FileList Function returns both file AND DIRECTORY names within a given directory. If you want to work only with file names, you will need to test the return names using the IsFile Function to determine which files you want to use.

Note: You cannot use FileList to return a list of files via FTP.
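As a rough sketch only (assuming the myPath and myFiles variables described below and RIFL's VBScript-style For...Next loop), filtering out directory names with IsFile might look like this:

' Gather everything in the directory, then keep only real files
FileList(myPath & "*.*", myFiles())
Dim i
For i = 0 To UBound(myFiles)
    If IsFile(myPath & myFiles(i)) Then
        LogMessage("INFO", "File to process: " & myFiles(i))
    End If
Next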

Exercise

1. Create process variables as described below.


Variables

myFiles (Variant(0))

This array contains a list of file names passed from the FileList function.

myFileCounter (Variant, -1)

This variable is used as the index of the myFiles array.

myPath (Variant)

This variable is used to store the path of the "inbox" directory. Consider using a lookup or user input to change this programmatically.

myCurrentFile (Variant)

This variable is used in the ChangeSource action within the Map step. The map initially points to a "NUL:" source file. This will change it to the next/current file name in the array.

2. Put a Transformation step onto the Canvas. Browse to the OutputModes_Clear_Append.map.xml from a previous exercise or from the solutions folder.

3. Accept the default in the “Sessions” dialog.

4. Name the step “LoadAccountsTable”.

5. Put in a scripting step as described below:


BuildFileList (Scripting)

' Set directory for incoming files.

' Consider using lookup or user input for this value.

myPath = MacroExpand("$(funData)") & "InBox\"

' Gather list of file names. Use wildcards if needed.

FileList(myPath & "AddrChg*.*", myFiles())

' Set array index counter (Zero based).

myFileCounter = UBound(myFiles)

Predecessors

LoadAccountsTable Unconditional

6. Put in a decision step as described below:

GotFiles? (Decision)

myFileCounter > -1

Predecessors

BuildFileList Unconditional

FileCounter Unconditional

7. Put in scripting step as described below:

Notification_NoFiles (Scripting)

MsgBox("No Files to Process: Exiting")

Predecessors

GotFiles? False

8. Put in a scripting step as described below:

SetCurrentFile (Scripting)

' Trap runtime errors (eg, Array Index Out of Bounds)

ON ERROR GOTO myError


' Set var for use in Map.

' This var will be used in ChangeSource action.

myCurrentFile = myPath & myFiles(myFileCounter)

' Verification...

Dim A

A = Ubound(myFiles) - myFileCounter

MsgBox("File name = " & myFiles(myFileCounter) & "

" & "File " & A + 1 & " of " & Ubound(myFiles)+1)

' Use Return statement to exit this module

Return

' Error handler

myError:

' Get the error info and check variable values

MsgBox("Err.Number = " & Err.Number & "

" & "Err.Description = " & Err.Description & "

" & "myPath=" & myPath & "

" & "myFileCounter=" & myFileCounter & "

" & "myFiles(0)=" & myFiles(0))

' This might only be terminating the step...

LogMessage("ERROR","err.number = " & Err.number)

Terminate()

Predecessors

GotFiles? True

9. Put in a Transformation step and click “New”. Then build a map to the specifications below:

Source (ASCII (Delimited))

location Nul:

SourceOptions


header True

Source Schema

Record R1

Name Type Length Description

Account Number Text 9

New Street Text 34

Total 43

Target (ODBC 3.x)

Database TrainingDB

table tblAccounts

Outputmode Update

Key Field AccountNumber

Update Mode Options Update ALL matching records and ignore non-matching records.

Map Expressions

R1.AccountNumber Records("R1").Fields("Account Number")

R1.Street Records("R1").Fields("New Street")

Variables

Name Type Public Value

myPath Variant yes

myCurrentFile Variant yes


myFileCounter Variant yes -1

MapEvents

BeforeTransformation ChangeSource

source name Source

connection string "+File=" & myCurrentFile

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

10. Save the map as “UpdateAddresses” and close Map Designer.

11. Use the same session as was created in the first Transformation Step.

12. Name this step “UpdateAdds”.

13. Put in a Decision step as described below:

SuccessCheck (Decision)

Project("UpdateAdds").ReturnCode == 0

Predecessors

UpdateAdds Unconditional

14. Put in a scripting step as described below:

FileCounter (Scripting)

' Decrement the file counter variable

myFileCounter = myFileCounter - 1

Predecessors

SuccessCheck True

15. Put in a scripting step as described below:


Notification_UpdateFailure (Scripting)

MsgBox("Update Address Map Failed")

Predecessors

SuccessCheck False

16. Connect the steps as in the screen shot above the exercise instructions.

17. Validate the process.

18. Save it as “FileListLoop”.

19. Run it and note the results.


Integration Engine

Integration Engine is an embedded data Transformation engine used to deploy runtime data replication, migration and Transformation jobs on Windows or UNIX-based platforms quickly and easily without costly custom programming. It fills the need for a low-cost, universal data transformation engine.

The Integration Engine is a 32-bit data transformation engine written in C++, containing the core data driver modules that are the foundation for the transformation architecture. Because the Integration Engine is a pure execution engine with no user interface components, it can perform automatic, runtime data transformations quickly and easily, making it ideal for environments where regular data transformations need to be scheduled and launched on Windows or UNIX-based systems.


Syntax: Version Information

Objectives

This lesson shows how to retrieve version and licensing information from the engine via the command line interface.

Keywords: "djengine" Executable and Version Information

Exercise

1. Open a command window by typing “cmd” in the Windows Run dialog.

2. Navigate to the directory where Cosmos is installed

3. Default directory is: C:\Program Files\Pervasive\Cosmos\Common800

4. To get the current engine version information, type: djengine -version


Options and Switches

Objectives

This lesson shows how to get the usage syntax and all options, or switches, available through the command line interface of Integration engine.

Keywords: Syntax and Option Overrides

Exercise

View the different options and parameters available for executing transformations and processes by using the “-?” switch.

To see all the available options, at the command prompt type: djengine -help


Execute A Transformation

Objectives

This lesson shows how to execute a Transformation Map via the command line interface.

Keywords: Executing a Map

Description

At the command prompt type: djengine MapName.tf.xml

Tip: You can drag and drop transformation file name from a Windows Explorer window to command line

Add -verbose at the end of the command to get statistics printed to the console during runtime.

At the command prompt type: djengine C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTest.tf.xml -verbose


Using a “-Macro_File” Option

Objectives

At the end of this lesson you should be able to utilize a Macro Definition file for porting Maps and Processes from one Integration Engine installation to another.

Keywords: Macro Definition, Macro Manager and Macro File

Description

There are two ways to use a Macro to define a connection on the command line.

The -Macro_File command shows the path to the Macrodef.xml file (the default location for that file is in the Workspace1 folder) that holds the values of all the macros that we have defined. If you have multiple macros defined, this may be your preferred method.

At the command prompt type: djengine -Macro_File C:\Cosmos_Work\Workspace1\macrodef.xml C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithMacro.tf.xml -verbose

The -Define_Macro command allows us to define individual Macros on the command line.

At the command prompt type: djengine -Define_Macro Data=C:\Cosmos_Work\Fundamentals\Data\ C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithMacro.tf.xml -verbose


Command Line Overrides – Source Connection

Keywords: Dynamic Override for Source File

Let's substitute a different source file for the original file defined in the Transformation to show how overrides can be performed at execution time. The syntax of the command is:

djengine -Source_Connect_Info string (include path)

At the command prompt type:

djengine -Source_Connect_Info C:\Cosmos_Work\Fundamentals\Data\AccountsSmall.txt C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithMacro.tf.xml -verbose

AccountsSmall.txt is a file that has the same format as Accounts.txt, but it only has 54 records.

Note that only 54 records were written. Note also that we did not need to define the Macro or the path to the Macro File. The Macro in the map was only used in the source connection and we defined a new source with a complete path. So the Macro was no longer relevant.


Ease of Use: Options File

Keywords: Using Text Editor for Command Line Options

Type the command from the previous step (leave out the first word, DJEngine) in Textpad and save it as Options.bas in the Cosmos root directory.

This is called an Optfile
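For example, using the source override command from the previous lesson, Options.bas would contain just the arguments with the leading djengine removed:

-Source_Connect_Info C:\Cosmos_Work\Fundamentals\Data\AccountsSmall.txt C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithMacro.tf.xml -verbose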

Type the command djengine @Options.bas. (If you did not save it in the Cosmos root directory, you'll have to include the path to where you saved the Options.bas file.)

At the command prompt type: DJEngine @Options.bas


Executing a Process

Keywords: Using the Process Design Option

Command syntax is djengine -process_execute file name (include path)

At the command prompt type: djengine -Macro_File C:\Cosmos_Work\Workspace1\macrodef.xml C:\Cosmos_Work\Fundamentals\Solutions\ProcessDesigner_DataIntegrator\CreatingAProcess.ip.xml -verbose

Note that we had to use the Macro_File command because some of the Maps in the process had a Macro as part of the source connection.


Using the “-Set” Variable Option

Objectives

At the end of this lesson you should be able give a variable a value from the command line.

Keywords: -Set

Description

In the Solutions\MapDesigner_TransformationFundamentals folder there is a transformation that has a msgbox that displays the value of a variable.

First let’s run the map without changing the value of the variable.

At the command prompt type: djengine C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithVar.tf.xml

Click OK on the MsgBox pop up.

Note that without the -Verbose command the only command line indication that the Map ran correctly is a single line, "Return Code : 0"

Now let’s change the value of the variable.

For a string with a single word, type at the command prompt:

djengine -se myVar=\"NewValue\" C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithVar.tf.xml

Click OK on the MsgBox pop up.


For a string with multiple words, type at the command prompt:

djengine -se "myVar=\"New Value\"" C:\Cosmos_Work\Fundamentals\Solutions\IntegrationEngine_CommandLine\EngineTestwithVar.tf.xml

Click OK on the MsgBox pop up.

Additional notes:

Aside from normal command line quoting/escaping sequences for the given operating system, what is to the right of the equals sign will be used verbatim in an expression to set the variable.

On windows, the only command line quote character is the double quote, and it is escaped using a backslash. By using -se gblsStartDate='07-09-1976' you are causing the expression gblsStartDate = '07-09-1976' to be executed, which of course does nothing since the single quote indicates the start of a comment.

By using -se gblsStartDate=07-09-1976 you are causing the expression gblsStartDate = (07 - 09) - 1976 to be executed. If you use -se gblsStartDate="07-09-1976" you will get the same results as above (as if the quotes weren't present).

However, if you use -se gblsStartDate=\"07-09-1976\" the expression gblsStartDate = "07-09-1976" will be executed, which is what you want.

Note that this also means you can do something like -se gblsStartDate=now() and have gblsStartDate = now() executed.


Scheduling Executions

Keywords: Using NT Task Scheduler

To set up a Windows task:

Go to Programs>Accessories>System Tools>Scheduled Tasks>Add Scheduled Task

The Wizard will walk you through the steps.
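When the wizard asks which program to run, point it at the djengine executable and supply the same arguments you would type at the command prompt, for example (paths are illustrative and depend on your installation and where you saved the options file):

"C:\Program Files\Pervasive\Cosmos\Common800\djengine.exe" @Options.bas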


Mapping Techniques

This section explores the capabilities of Transformation Map Designer in more detail.


Multiple Record Type Structures


Multiple Record Type – 1 One-to-Many

Objectives

At the end of this lesson you should be able to create a target file that has multiple record types from a source file that has a single record type. You should also be able to deal with the situation in which some of the data is repeated from source record to source record, but only one target record should be written for it. You will also become more familiar with the OnDataChange event.

Keywords: OnDataChange Event, Data Change Event, Parse and Format functions

Description

When you want to create a multiple-record-type target file from a single-record-type source, there are two possible scenarios. In the simplest, you want to break down each source record into n different target records. You might want to take source fields 1-5 from the source and put them in target record "A" and take source fields 6-10 and put them in target record "B." To perform this task, you define your target record types as you learned in an earlier lesson and then, in an AfterEveryRecord event, just perform two "ClearMapPut" actions, one for each target record type.

Continuing our previous example, a more complex situation occurs when you don't necessarily want to create both target records "A" and "B" from each source record. Let's assume that the source file contains customer information in fields 1-5 and sales information in fields 6-10. We'll assume that the source is in order by field 1, the customer number. In this case, we only want to write a target record "A" when we encounter a new customer, although we want to write a target record "B" for each source record.

To perform this task, we’ll just bring in our OnDataChange event (which we’ve learned about in an earlier lesson). For our solution, we’ll set up the event to monitor the customer number. Each time it changes (including the change from an empty source buffer to the valid value from the very first record), we will write out a target record “A.” We’ll use our AfterEveryRecord event to write out target record “B.”

Keep in mind that this transformation is assuming a single target file with multiple record types. This is very similar to a target with a single database with two tables, and in this latter situation different techniques would be used, though the events will be very similar.

In this exercise, we have a source whose records contain employee information and information about the cars the employees drive. If an employee has more than one car, there is one source record for every car. Thus the employee information is written redundantly when an employee has more than one car. In our target we want to eliminate the redundancy. We will write one target record for each employee and we will write one "child" record of a different record type for each "auto".

So if “E” is employee and “A” is auto, our source looks something like this:

E1,A1

E1,A2

E1,A3

E2,A1

E2,A2

And we want the target to look like this:

E1


A1

A2

A3

E2

A1

A2

To duplicate this in your real world situations, the trick is to know where to put your ClearMapPut event handlers that write target records. You make that decision by knowing what is in the source buffer or buffers. (The Source Buffer is the internal object that stores the values that have just been read in from a source record. There is one buffer for each source record type.) And you need to understand what you need in the target. When the source buffers have all the data that you need is when you write a target record.

One more thing: As a general rule, you need one action (ClearMapPut is most common) that writes a target record per every target record type.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the One_to_ManyRecordTypes transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Autos_Sorted.txt

SourceOptions

header True

Target (ASCII (Fixed))

location $(funData)Autos_MultiRecType.txt

outputmode Replace


Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout Auto

buffered false

OnDataChange1 ClearMapPut Record

target name Target

record layout Employee

buffered false

Data Change Monitors

recordlayout name R1

Records("R1").Fields("Initials")

Target Schema

Record Employee

Name Type Length Description

RecordID Text 1

Initials Text 2

Phone Text 10

City Text 9

State Text 2

Total 24


Record Auto

Name Type Length Description

RecordID Text 1

Initials Text 2

Year Text 4

Make Text 10

Color Text 5

Total 22

Map Expressions

Employee.RecordID "E"

Employee.Initials Records("R1").Fields("Initials")

Employee.Phone Records("R1").Fields("Phone")

Employee.City Records("R1").Fields("City")

Employee.State Records("R1").Fields("State")

Auto.RecordID "A"

Auto.Initials Records("R1").Fields("Initials")

Auto.Year Records("R1").Fields("Year")

Auto.Make Records("R1").Fields("Make")

Auto.Color Records("R1").Fields("Color")


Multiple Record Type – 2 Many-to-One

Objectives

At the end of this lesson you should be able to work with a multiple record type source file and create a single record type target file. You’ll gain an understanding of the special nature of the source buffer for these transformations, and the relationship of the various event handlers to their individual record types.

Keywords: Multi to one record type, and multiple record layouts

Description

When you specify a source file that contains multiple record types (either by applying an existing multiple-record-type structured schema to it or by creating a new multiple-record-type structured schema) you will find that the Source Hierarchy on the Map Tab will display the individual record types and give you access to the individual fields for each record type.

To create a single-record-type target file, you map whatever fields you want from whatever record types they exist in down to the target field list- just as you do in any other Map Design. The Map Designer will take care of precisely identifying each field with not only its name but also with the record type from which it came.

One key to working with multiple-record-type source files is an understanding of how the source buffer is structured. When the source file specifies multiple record types, your transformation will automatically set up a large source buffer that contains a “section” for each different record type. Each such “section,” of course, contains a space for each field defined in that record type. As your transformation reads the source file, it uses the structured schema and the recognition rules to identify the record type for each record. Once the record type is identified, it is placed into its proper section of the source buffer.
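Using the Employee and Auto record types from the previous exercise as an illustration, the source buffer can be pictured like this (a conceptual sketch, not literal product output):

Source buffer
    Employee section: Initials, Phone, City, State   (filled when an Employee record is read)
    Auto section:     Initials, Year, Make, Color    (overwritten each time an Auto record is read)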

The other key to working with multiple-record-type source files is to use the right event handler at the right time. Each source record type has its own set of event handlers. For example, you may perform a set of actions each time a record of a particular type is read, or after the first occurrence of each record type is read and so on. (Using the General Event Handlers, you can also perform actions globally for all record types.) If your target layout is going to contain fields from three different record types, you will not want to attempt to write a target record until the source buffer sections for all three of those record types have been filled.

If we assume that the source file always contains all three record types for each object (e.g., customer, account, sale), and if we know that the order is always 1-2-3, then we can simply use the AfterEveryRecord event for record type 3 to write out a target record. The situation is a bit more complex if some record types might not exist for a given object yet we want to write a target record with the data we do have. Assuming the same order restriction, if record type 2 is the one that is missing, then the problem is that data from a previous record type 2 may still be present in the source buffer, and we may have to clear that section of the source buffer ourselves. But if record type 3 is missing we have a different problem. Our AfterEveryRecord event will not be triggered and no target record will be written. We can solve this problem, too, but it will take a bit of work.

Similar problems exist if the sequence of records in the source file can change from object to object, but again, these problems can be solved if we understand the operation of the source buffer, use the right event handlers at the right time and perhaps also clear sections of the source buffer ourselves when necessary.


With this exercise we’ll take the file that we created in the last exercise and change it back into the format it had before. So here we have this:

E1

A1

A2

A3

E2

A1

A2

And we want this in our target:

E1,A1

E1,A2

E1,A3

E2,A1

E2,A2

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Many_to_OneRecordType transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)SrcAutosRecordType.txt

Structured Schema

originallocation xmldb:ref:///C:/Cosmos_Work/Fundamentals/Solutions/MapDesigner_MappingTechniques

schemaname AutosRecordType.ss.xml


Auto Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

buffered false

Target (ASCII (Delimited))

location $(funData)AutosCombined.txt

TargetOptions

header True

Target Schema

Record R1

Name Type Length Description

Initials Text 2

Phone Text 10

City Text 9

State Text 2

Year Text 4

Make Text 10

Color Text 5

Total 42

Map Expressions

R1.Initials Records("Employee").Fields("Initials")


R1.Phone Records("Employee").Fields("Phone")

R1.City Records("Employee").Fields("City")

R1.State Records("Employee").Fields("State")

R1.Year Records("Auto").Fields("Year")

R1.Make Records("Auto").Fields("Make")

R1.Color Records("Auto").Fields("Color")


User Defined Functions

The Rapid Integration Flow Language (RIFL) encompasses functions, statements and keywords that are used in Source/Target Filters, Target Field Expressions, and Code Modules. You will recognize VBScript and Visual Basic functions and some SQL Statements. Map Designer also employs many unique functions, which were designed to help you get the most out of your data.

One of the powerful features of this language is the ability to abstract and reuse scripts in the form of User-Defined Functions. These functions can be stored and edited in a text file (code module) in a centralized location and all of your Maps have access to them.


Code Reuse – Save/Open a RIFL script Code Modules

Objectives

At the end of this lesson you should be able to save and reopen an extract script.

Keywords: RIFL Script Editor

Description

The first level of code reusability is simply to save a script to a file. You will need to make any necessary changes when you reopen it in a different map, but the script itself remains intact.
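For instance, a single-expression script like the one below could be saved from the RIFL Script Editor and reopened later. The field reference is illustrative only; you would adjust it to whatever source field the new map actually uses.

' Saved as a .rifl text file (file name of your choosing); reformats a raw date value.
DateValMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")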

Exercise

Simply open any RIFL Script in the Editor window and click the Save button on the toolbar. This saves the script as a text file with a RIFL extension to a location you choose on your machine or network.

To reuse the script, click the Open Folder toolbar button in another Script editor window. You will need to manually change any parameters for use in the new Script window.

Next, we will show you how to make the functions more flexible by abstracting them into User Defined Functions and storing them in Code Modules.


Code Reuse - Code Modules

Objectives

At the end of this lesson you should be able to call a user-defined function from a code module.

Keywords: User Defined Functions, Code Modules, and RIFL Script Editor

Description

You may call user-defined functions from an external Code Module in Map Designer. Code modules are text-only files saved with a RIFL (Rapid Integration Flow Language) extension, and the expressions inside them are written in the RIFL expression language.

External code modules can be moved to any other machine with the Map Designer or Integration Engine without a problem. This will allow you to develop a user-defined "library" for use among different members of your team.
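As an illustration only, the ZipCodeLogic.rifl module used in the exercise below might contain a function along these lines. The actual module is not reproduced here; this is a minimal sketch that assumes RIFL's VBScript-style Function...End Function syntax, and the classification logic is invented for the example.

' Hypothetical user-defined function stored in a code module.
Function zipTest(zip)
   If Len(zip) = 10 Then
      zipTest = "Zip+4 format"
   ElseIf Len(zip) = 5 Then
      zipTest = "Standard 5-digit zip"
   Else
      zipTest = "Unexpected zip length"
   End If
End Function

A target field expression can then call zipTest(Records("R1").Fields("Zip")), exactly as the specification below does for R1.ZipReport.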

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the CodeReuse_UserDefinedFunction transformation in the Solutions folder:

Code Module : $(funData)Scripts\ZipCodeLogic.rifl

Source (ASCII (Delimited))

location $(funData)Accounts.txt


SourceOptions

header True

Target (ASCII (Delimited))

location $(funData)ZipCodeTEST.txt

TargetOptions

header True

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Target Schema

Record R1

Name Type Length Description

Account Number Text 9

Zip Text 10

ZipReport Text 25

Total 44

Map Expressions

R1.Account Number Records("R1").Fields("Account Number")

R1.Zip Records("R1").Fields("Zip")

R1.ZipReport zipTest(Records("R1").Fields("Zip"))


Lookup Wizards

Lookup Wizards automate the process of creating lookups for your Transformations. You name the lookup or select an existing lookup to be edited, browse to files or tables to automatically build connection strings and select the key and returned fields. At the end of each Lookup Wizard, a reusable code module is created in your workspace containing the functions you need for doing lookups. The Code Module files generated by these wizards can then be reused in any Map you create.

There are three types of Lookup methodologies, and each has its advantages in certain situations:

Static Flat File Lookups are fast but not very portable or dynamic.

Dynamic SQL Lookups are portable and dynamic but not very fast.

Incore Table Lookups are extremely fast and can be made more dynamic with extra RIFL code but they use core memory to store the data.
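Whichever methodology you choose, the wizard writes ordinary RIFL functions into the code module, and your map expressions simply call them with a key value and a default. A minimal sketch of the call pattern, using the function name generated for the Flat File Lookup exercise that follows, where the key is read back from the target field that was just mapped:

' Returns the category literal for the code in the target buffer, or "NoMatches" if the key is not found.
myCategories_Field2_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")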


Flat File Lookup

Keywords: Lookup Wizard, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values

Description

Flat File Lookups allow us to look up data from a file that is not our source. We reference this data with a key value that does come from the source; the lookup returns matching data, or a default value if no match is found. The Lookup Function Wizard allows us to build these customized functions and store them in a code module.

We will also be unrolling a data field that contains multiple values. The Favorites categories are all stored in one field with a pipe delimiter separating them. We will create a unique target record for each of the values stored in a single source record. The Count and Counter Variable parameters of the ClearMapPut action can be used to parse this field and unroll the records dynamically.
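As a worked example of the unrolling arithmetic (the field value is illustrative): if Favorites contains "Books|Music|Games", CharCount finds two pipes, so the Count expression evaluates to 3 and three target records are written; on each pass the counter variable advances and parse returns the next value.

' Count parameter: number of pipe-delimited values in the field ("Books|Music|Games" -> 3)
CharCount("|", Records("R1").Fields("Favorites")) + 1

' CategoryCode expression: the value for the current pass (1st, 2nd, 3rd, ...)
parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")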

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the FlatFileLookup transformation in the Solutions folder:

Code Module: $(funData)Scripts\myCategories.flatfile.rifl

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblFavoriteinfo


Outputmode Clear File/Table contents and Append

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

count ' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1

counter variable myFavoritesCounter

buffered false

Map Expressions

R1.FavoritesID Serial()

R1.Account Number Records("R1").Fields("Account Number")

R1.CategoryCode parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")

R1.CategoryLiteral myCategories_Field2_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")

R1.ProductManager myCategories_Field3_Lookup(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")


Dynamic SQL Lookup

Keywords: Lookup Wizard, Dynamic SQL Lookup, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values

Description

Dynamic SQL Lookups allow us to look up values from another source when that source is a relational table or view. Again we will use the Lookup Function Wizard to create User Defined Functions that are stored in a code module.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the DSQLLookup transformation in the Solutions folder:

Code Module: $(funData)Scripts\myCategories.dynsql.rifl

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblFavoriteinfo

Outputmode Clear File/Table contents and Append


Variables

Name Type Public Value

CatImp DJImport yes

MapEvents

BeforeTransformation Execute

expression 'Initialize the DJImport object myCategories_Init()

AfterTransformation Execute

expression myCategories_Terminate()

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

count ' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1

counter variable myFavoritesCounter

buffered false

Map Expressions

R1.FavoritesID Serial()

R1.Account Number Records("R1").Fields("Account Number")

R1.CategoryCode parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")

R1.CategoryLiteral myCategories_Category_Lookup _

(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")

R1.ProductManager myCategories_ProductManager_Lookup _


(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")


Incore Table Lookup

Keywords: Lookup Wizard, Incore Memory Table & Lookup, Count & Counter Variable parameters, One-to-Many records (unrolling occurrences), and referencing Target Field values

Description

An Incore memory table lookup can be used when speed is of the utmost importance. The primary method of creating the incore table is to use a DJImport object, much as we did with the Dynamic SQL lookup. This time, however, the record set returned by the Select statement is stored in a memory table, and the memory table is then accessed to perform the lookup.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the InCoreLookup transformation in the Solutions folder:

Code Module: $(funData)Scripts\myCategories.itable.rifl

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x)

Database TrainingDB

table tblFavoriteinfo

Outputmode Clear File/Table contents and Append


MapEvents

BeforeTransformation Execute

expression myCategories_Init()

AfterTransformation Execute

expression myCategories_WriteToFile("$(funData)myCategoriesFile.txt", "|")

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

count ' Evaluate source field to determine how many occurrences of data exist.
' This is translated to the number of child records written to the Favorites table.
CharCount("|", Records("R1").Fields("Favorites")) + 1

counter variable myFavoritesCounter

buffered false

Map Expressions

R1.FavoritesID Serial()

R1.Account Number Records("R1").Fields("Account Number")

R1.CategoryCode parse(myFavoritesCounter, Records("R1").Fields("Favorites"), "|")

R1.CategoryLiteral myCategories_Category_Lookup _

(Targets(0).Records("R1").Fields("CategoryCode"), "NoMatches")

R1.ProductManager myCategories_ProductManager_Lookup _

(Targets(0).Records("R1").Fields("CategoryCode"), "NoManagers")


RDBMS Mapping


Select Statements – SQL Passthrough

Keywords: SQL Select Statements

The Map Designer source connectors allow you to pass Select statements through to a database server to obtain a row set. The row set returned by the query then becomes the source data for your Map.

Alternatively, you can use the SQL script that generates this source record set by using the SQL File connection option and pointing to the matching SQL Script file in the Data folder.

This exercise creates a simple transformation that takes only the records from Texas and puts them into our target.
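For example, the statement used by the solution below could either be entered directly as the SQL Statement or saved in a small SQL script file (the file name would be your choice) and referenced through the SQL File option; both routes return the same row set.

Select * from tblAccounts where State = 'TX'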

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the SelectStatements transformation in the Solutions folder:

Source (ODBC 3.x)

Database TrainingDB

SQLStatement Select * from tblAccounts where State = 'TX'

Target (ASCII (Delimited))

location $(funData)TXAccounts.txt

TargetOptions

header True

Outputmode Replace

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target


record layout R1

Map Expressions

R1.AccountNumber Fields("AccountNumber")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Fields("BirthDate")

R1.Favorites Fields("Favorites")

R1.StandardPayment Fields("StandardPayment")

R1.LastPayment Fields("LastPayment")

R1.Balance Fields("Balance")


Integration Querybuilder

Objectives

At the end of this lesson you should be able to extract data from one or more tables in the same database by using a SQL Passthrough statement.

Keywords: Integration Query Builder, SQL Passthrough Statements

Description

The Map Designer source connectors allow you to pass Select statements through to a database server to obtain a row set. The row set returned by the query then becomes the source data for your Map.

Use the Integration Query Builder to generate the source record set. Alternatively, you can use the SQL script that generates this source record set by using the SQL File connection option and pointing to the matching SQL Script file in the Scripts folder.

When you choose an RDBMS source connector, an additional choice appears on the Source tab. You can now choose whether you want to point directly to a table or view, pass a SQL statement through, or point to a SQL script file that already has a SQL statement in it.

We will construct our own using the query builder.

Exercise

Once you have connected to a data source (described below), your connection is displayed in the upper-right pane. You can set up and save as many data source connections as you need. Integration Querybuilder stores all connections you create unless you explicitly delete them.

1. Double-click the connection you want to use. The DB Browser in the lower-right pane will display the database.

2. Click the database icon to display the icons for tables, views and procedures for this database. Clicking on these will display their contents. Click on the individual tables to list their columns, or right-click and select Get Details from the shortcut menu to see the SQL representation of column values such as length, data types and whether they are used as primary or secondary keys.

3. To create a query, select New Query from the Query menu. A new query icon will be opened beneath the connection icon in the upper-right pane. You can rename it now or later by right-clicking on the icon.

4. Drag the tables and views you want to use into the upper-left pane. This is called the Relations pane. As you drag tables into this pane, you will see that SELECT... FROM statements are created in the SQL pane. If tables are already linked in the database, these links will be displayed, although these can be changed or removed for the purpose of this particular query.

If you are using a table more than once, the second and further copies will be renamed. For example, if you already have a Customer table in the Relations pane and you drag across another copy, it will be automatically renamed Customer1.


The Select statement that is generated becomes part of the connection string and it is passed through to the database server.

We can now map this data into any target type and format we desire.

There follows some information taken from reports generated by Repository Manager from the RDBMS_SelectStatements transformation in the Solutions folder:

Source (ODBC 3.x)

Database TrainingDB

SQLStatement

SELECT

srcAccounts.[Account Number],

srcAccounts.Name,

srcAccounts.Company,

srcAccounts.Street,

srcAccounts.City,

srcAccounts.State,

srcAccounts.Zip,


srcPurchases.PONumber,

srcPurchases.Category,

srcPurchases.ProductNumber,

srcPurchases.ShipmentMethodCode

FROM

(srcAccounts

RIGHT JOIN srcPurchases ON

srcAccounts.[Account Number] = srcPurchases.AccountNumber)

ORDER BY

srcPurchases.ShipmentMethodCode,

srcAccounts.City

Target (ASCII (Delimited))

location $(funData)Purchases_SQLSelect.txt

TargetOptions

header True

Outputmode Replace

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.Account Number Fields("Account Number")

R1.Name Fields("Name")

R1.Company Fields("Company")


R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.PONumber Fields("PONumber")

R1.Category Fields("Category")

R1.ProductNumber Fields("ProductNumber")

R1.ShipmentMethodCode Fields("ShipmentMethodCode")


DJX in Select Statements – Dynamic Row sets

Keywords: Integration Query Builder, DJX Syntax, and Dynamic Row Sets via User Interaction, InputBox

Description

With DJX, you escape into the RIFL (Rapid Integration Flow Language) expression language, where you can build SQL statements dynamically. This allows you to pull values into a SQL statement at runtime. For instance, if you wanted only the records that were entered into a table yesterday, you could use the RIFL "Date" function to return the current system date and the "DateAdd" function to subtract one day, and then make that value part of your Select statement via DJX.

This exercise will pull in the records from the tblAccounts table that are from a particular state. That state will be passed into the SQL select statement at runtime.
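A minimal sketch of that dynamic-date idea follows. The table and column names are invented for illustration, the DateAdd call assumes the VBScript-style signature that RIFL recognizes, and the exact date literal formatting your database expects may require additional masking.

' BeforeTransformation expression: compute yesterday's date into a map variable (hypothetical name).
varCutoff = DateAdd("d", -1, Date())

' Source SQL statement, with DJX escaping into RIFL to splice the value in at runtime:
' Select * from tblOrders where OrderDate >= 'DJX(varCutoff)'

The exercise below uses the same mechanism, substituting an InputBox prompt for the date calculation.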

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the DJXSelectStatement transformation in the Solutions folder:

Source (ODBC 3.x)

Database TrainingDB

SQLStatement Select * from tblAccounts where State = 'DJX(varState)'

Target (HTML)

location $(funData)AccountsbyState.htm

TargetOptions

index False

mode table

tableborder True


Outputmode Replace

Variables

Name Type Public Value

varState Variant no

MapEvents

BeforeTransformation Execute

expression varState = InputBox("Enter the two letter code for the State", "State Input", "TX")

Source R1 Events

AfterEveryRecord ClearMapPut Record

target name Target

record layout R1

Map Expressions

R1.AccountNumber Fields("AccountNumber")

R1.Name Fields("Name")

R1.Company Fields("Company")

R1.Street Fields("Street")

R1.City Fields("City")

R1.State Fields("State")

R1.Zip Fields("Zip")

R1.Email Fields("Email")

R1.BirthDate Fields("BirthDate")

R1.Favorites Fields("Favorites")


R1.StandardPayment Fields("StandardPayment")

R1.LastPayment Fields("LastPayment")

R1.Balance Fields("Balance")


Multimode Introduction

Keywords: Multimode Functionality, Insert Action, and Count Parameter

Multimode functionality allows us to write to more than one table in the same database within a single Transformation.

The Account Numbers in the Accounts.txt file all start with either “01” or “02”. The ones that start with “01” are trading partners. We want to set up a Transformation that will run those records into our tblTradingPartners table in the TrainingDB Database. The records that start with “02” are individual customers and we want them to go into the tblIndividuals table.
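The routing relies on the Count parameter of each Insert action: an expression that evaluates to 1 writes the record to that table, and 0 suppresses it. The expression below is taken directly from the specification that follows; the tblTradingPartners insert uses the same test against "01".

'Evals customer code and sets Count to 1 if Individual
If Left(Records("R1").Fields("Account Number"), 2) = "02" Then
   1
Else
   0
End if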

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Mulitmode_Introduction transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x MultiMode)

Database TrainingDB

MapEvents

BeforeTransformation Drop Table

target name Target

table name tblIndividuals

BeforeTransformation Drop Table


target name Target

table name tblTradingPartners

BeforeTransformation Create Table

target name Target

record layout Individuals

table name tblIndividuals

BeforeTransformation Create Table

target name Target

record layout TradingPartners

table name tblTradingPartners

Source R1 Events

AfterEveryRecord ClearMapInsert Record

target name Target

record layout Individuals

table name tblIndividuals

count 'Evals customer code and sets Count to 1 if Individual

If Left(Records("R1").Fields("Account Number"), 2) = "02" Then

1

Else

0

End if

AfterEveryRecord ClearMapInsert Record

target name Target

record layout TradingPartners

table name tblTradingPartners

count 'Evals customer code and sets Count to 1 if Individual


If Left(Records("R1").Fields("Account Number"), 2) = "01" Then

1

Else

0

End if

Target Schema

Record Individuals

Name Type Length Description

Account Number CHAR 9

Name CHAR 21

Street CHAR 35

City CHAR 16

State CHAR 2

Zip CHAR 10

Email CHAR 25

Birth Date DATE 4

Favorites CHAR 11

Standard Payment CURRENCY 20

Payments CURRENCY 20

Balance CURRENCY 20

Total 193

Record TradingPartners

Name Type Length Description

Account Number CHAR 9

Name CHAR 21

Company CHAR 31


Street CHAR 35

City CHAR 16

State CHAR 2

Zip CHAR 10

Email CHAR 25

Standard Payment CURRENCY 20

Payments CURRENCY 20

Balance CURRENCY 20

Total 209

Map Expressions

Individuals.Account Number Records("R1").Fields("Account Number")

Individuals.Name Records("R1").Fields("Name")

Individuals.Street Records("R1").Fields("Street")

Individuals.City Records("R1").Fields("City")

Individuals.State Records("R1").Fields("State")

Individuals.Zip Records("R1").Fields("Zip")

Individuals.Email Records("R1").Fields("Email")

Individuals.Birth Date DatevalMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")

Individuals.Favorites Records("R1").Fields("Favorites")

Individuals.Standard Payment Records("R1").Fields("Standard Payment")

Individuals.Payments Records("R1").Fields("Payments")

Individuals.Balance Records("R1").Fields("Balance")

Map Expressions

TradingPartners.Account Number Records("R1").Fields("Account Number")


TradingPartners.Name Records("R1").Fields("Name")

TradingPartners.Company Records("R1").Fields("Company")

TradingPartners.Street Records("R1").Fields("Street")

TradingPartners.City Records("R1").Fields("City")

TradingPartners.State Records("R1").Fields("State")

TradingPartners.Zip Records("R1").Fields("Zip")

TradingPartners.Email Records("R1").Fields("Email")

TradingPartners.Standard Payment Records("R1").Fields("Standard Payment")

TradingPartners.Payments Records("R1").Fields("Payments")

TradingPartners.Balance Records("R1").Fields("Balance")


Multimode – Data Normalization

Keywords: Comprehensive exercise, Create Unique Indexes (Action Keys), Primary & Surrogate keys, On Error & On Constraint Error event handling.

The Map Designer has a rich set of Event Handlers with predefined Actions that can make quick work of complex mapping problems. In this exercise we will normalize data from Accounts.txt as we load it directly to the target database. Each source record will be written to three different target tables, and in the case of the "Favorites" column we will again write one-to-many records. As we map to the three different tables, we need to map foreign keys and generate primary keys so that we can relate the data downstream. We can also "de-dupe" the data by placing unique indexes on the load tables and checking constraints as we insert rows. Finally, we will use more of the target Event Handlers to catch exception records. In this case, however, we will not use the Reject Connection Info functionality; instead we will insert exception records into our target database and add our own text for the reject reason at the same time.
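The exception capture works by pairing an Execute action that sets the rejectReason variable with a ClearMapInsert into tblRejects on the OnError and OnConstraintError events, followed by Resume so processing continues; the Rejects.RejectReason map expression then writes the captured text. A condensed view of the pattern from the specification below:

' OnConstraintError Execute expression: record why the row was rejected.
rejectReason = "General-OnConstraint"

' Rejects.RejectReason map expression: write the captured reason to the reject row.
rejectReason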

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Multimode_DataNormalization transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x MultiMode)

Database TrainingDB

Variables

Name Type Public Value

rejectReason Variant no "NoReasonAtAll"


MapEvents

BeforeTransformation Drop Table

target name Target

table name tblEntity

BeforeTransformation Drop Table

target name Target

table name tblFavorites

BeforeTransformation Drop Table

target name Target

table name tblPayments

BeforeTransformation Drop Table

target name Target

table name tblRejects

BeforeTransformation Create Table

target name Target

record layout Entity

table name tblEntity

BeforeTransformation Create Table

target name Target

record layout Favorites

table name tblFavorites

BeforeTransformation Create Table

target name Target


record layout Payments

table name tblPayments

BeforeTransformation Create Table

target name Target

record layout Rejects

table name tblRejects

BeforeTransformation Create Index

target name Target

record layout Entity

table name tblEntity

index name idxEntity

unique True

BeforeTransformation Create Index

target name Target

record layout Favorites

table name tblFavorites

index name idxFavorites

unique True

BeforeTransformation Create Index

target name Target

record layout Payments

table name tblPayments

index name idxPayments

unique false


BeforeTransformation Create Index

target name Target

record layout Rejects

table name tblRejects

index name idxRejects

unique false

Source R1 Events

AfterEveryRecord ClearMapInsert Record

target name Target

record layout Entity

table name tblEntity

AfterEveryRecord ClearMapInsert Record

target name Target

record layout Favorites

table name tblFavorites

count CharCount("|", Records("R1").Fields("Favorites")) + 1

counter variable cntFavorites

AfterEveryRecord ClearMapInsert Record

target name Target

record layout Payments

table name tblPayments

Target Events

OnConstraintError Execute


expression rejectReason = "General-OnConstraint"

OnConstraintError ClearMapInsert Record

target name Target

record layout Rejects

table name tblRejects

OnConstraintError Resume

OnError Execute

expression rejectReason = "General-OnError event"

OnError ClearMapInsert Record

target name Target

record layout Rejects

table name tblRejects

OnError Resume

Map Expressions

Entity.Account Number Records("R1").Fields("Account Number")

Entity.Name Records("R1").Fields("Name")

Entity.Company Records("R1").Fields("Company")

Entity.Street Records("R1").Fields("Street")

Entity.City Records("R1").Fields("City")

Entity.State Records("R1").Fields("State")

Entity.Zip Records("R1").Fields("Zip")

Entity.Email Records("R1").Fields("Email")

Entity.Birth Date DateValMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")


Map Expressions

Favorites.Account Number Records("R1").Fields("Account Number")

Favorites.FavoritesID Serial(0) ' Starts at 1 each execution. Consider using a lookup to get Max Value first.

Favorites.Favorites Parse(cntFavorites, Records("R1").Fields("Favorites"), "|")

Map Expressions

Payments.Account Number Records("R1").Fields("Account Number")

Payments.PaymentID Serial(0) 'Starts at one each execution. Consider using lookup for Max Value

Payments.Payments Records("R1").Fields("Payments")

Payments.Balance Records("R1").Fields("Balance")

Map Expressions

Rejects.Account Number Records("R1").Fields("Account Number")

Rejects.RejectID Serial(0) 'Starts at one each execution. Consider using lookup for Max Value

Rejects.RejectReason ' Use to set your own error message

rejectReason

Rejects.Name Records("R1").Fields("Name")

Rejects.Company Records("R1").Fields("Company")

Rejects.Street Records("R1").Fields("Street")

Rejects.City Records("R1").Fields("City")

Rejects.State Records("R1").Fields("State")

Rejects.Zip Records("R1").Fields("Zip")

Rejects.Email Records("R1").Fields("Email")

Rejects.Birth Date Records("R1").Fields("Birth Date")


Rejects.Favorites Records("R1").Fields("Favorites")

Rejects.Standard Payment Records("R1").Fields("Standard Payment")

Rejects.Payments Records("R1").Fields("Payments")

Rejects.Balance Records("R1").Fields("Balance")


Multimode Implementation with Upsert Action

Objectives

At the end of this lesson you should be able to use the Upsert Action.

Keywords: Multimode, Change Source, Upsert

Description

The Upsert Action is used by Multimode Connectors only. This Action updates records where there is a key match and inserts records where there is not.

This Map uses Multimode to load two tables, then uses a Change Source Action to switch to a second source file, and finally uses the Upsert Action to either insert or update records in those tables.

Exercise

1. Create our map based on the specifications given below.

2. Run the map and observe the result.

There follows some information taken from reports generated by Repository Manager from the Mulitmode_Implementation_withUpsertAction transformation in the Solutions folder:

Source (ASCII (Delimited))

location $(funData)Accounts.txt

SourceOptions

header True

Target (ODBC 3.x MultiMode)

Database TrainingDB

Variables

Name Type Public Value

varUpsertFlag Variant no 0


Map Events

BeforeTransformation Drop Table

target name Target

table name tblIndividuals

BeforeTransformation Drop Table

target name Target

table name tblTradingPartners

BeforeTransformation Create Table

target name Target

record layout Individuals

table name tblIndividuals

BeforeTransformation Create Table

target name Target

record layout TradingPartners

table name tblTradingPartners

Source R1 Events

AfterEveryRecord ClearMapInsert Record

target name Target

record layout Individuals

table name tblIndividuals

count 'Checks the Upsert Flag

If varUpsertFlag = 0 then

'Evals customer code and sets Count to 1 if Individual

If Left(Records("R1").Fields("Account Number"), 2) = "02" Then

1

End If


Else

0

End if

AfterEveryRecord ClearMapInsert Record

target name Target

record layout TradingPartners

table name tblTradingPartners

count 'Checks the Upsert Flag

If varUpsertFlag = 0 then

'Evals customer code and sets Count to 1 if Trading Partner

If Left(Records("R1").Fields("Account Number"), 2) = "01" Then

1

End If

Else

0

End if

AfterEveryRecord ClearMap

target name Target

record layout Individuals

count If varUpsertFlag = 1 then 1 end if

AfterEveryRecord ClearMap

target name Target

record layout TradingPartners

count If varUpsertFlag = 1 then 1 end if

AfterEveryRecord Upsert Record

target name Target

record layout Individuals

table name tblIndividuals


count 'Checks the Upsert Flag

If varUpsertFlag = 1 then

'Evals customer code and sets Count to 1 if Individual

If Left(Records("R1").Fields("Account Number"), 2) = "02" Then

1

End If

Else

0

End if

AfterEveryRecord Upsert Record

target name Target

record layout TradingPartners

table name tblTradingPartners

count 'Checks the Upsert Flag

If varUpsertFlag = 1 then

'Evals customer code and sets Count to 1 if Trading Partner

If Left(Records("R1").Fields("Account Number"), 2) = "01" Then

1

End If

Else

0

End if

Source Events

OnEOF ChangeSource

source name Source

connection string If varUpsertFlag = 0 then

varUpsertFlag = 1

"+File=$(funData)AccountsUpdate.txt"

End if

Map Expressions


Individuals.Account Number Records("R1").Fields("Account Number")

Individuals.Name Records("R1").Fields("Name")

Individuals.Street Records("R1").Fields("Street")

Individuals.City Records("R1").Fields("City")

Individuals.State Records("R1").Fields("State")

Individuals.Zip Records("R1").Fields("Zip")

Individuals.Email Records("R1").Fields("Email")

Individuals.Birth Date DatevalMask(Records("R1").Fields("Birth Date"), "mm/dd/yyyy")

Individuals.Favorites Records("R1").Fields("Favorites")

Individuals.Standard Payment Records("R1").Fields("Standard Payment")

Individuals.Payments Records("R1").Fields("Payments")

Individuals.Balance Records("R1").Fields("Balance")

Map Expressions

TradingPartners.Account Number Records("R1").Fields("Account Number")

TradingPartners.Name Records("R1").Fields("Name")

TradingPartners.Company Records("R1").Fields("Company")

TradingPartners.Street Records("R1").Fields("Street")

TradingPartners.City Records("R1").Fields("City")

TradingPartners.State Records("R1").Fields("State")

TradingPartners.Zip Records("R1").Fields("Zip")

TradingPartners.Email Records("R1").Fields("Email")

TradingPartners.Standard Payment Records("R1").Fields("Standard Payment")

TradingPartners.Payments Records("R1").Fields("Payments")

TradingPartners.Balance Records("R1").Fields("Balance")


Management Tools

The Pervasive Integration Platform has several tools that perform various tasks that are convenient for users who are working with our main applications.


Upgrade Utility

The Upgrade Utility allows you to update existing Transformations created in previous versions of Map Designer to the current version of Map Designer 8.x.


Upgrading Maps from Prior Versions

Objectives

At the end of this lesson you should be able to update existing Transformations created in previous versions of Map Designer to the current version of Map Designer 8.x.

Keywords: DJ750.mdb, Access Repository, .djs

Description

How to Use the Upgrade Utility

1. Open the Upgrade Utility from the Start Menu ... Utilities > Upgrade Utility. The Upgrade Utility dialog will open.

To Upgrade from v7.xx

1. In the MDB Database box, type in the location of the Transformation you wish to convert from, including the complete path. Or you may click the Find button and navigate to the correct location, then click Open.


To Upgrade to v8.x

1. In the Workspace box, type in the name of the Workspace in which you wish to save the new Transformation. Or you may click on the Change button and access the Workspace Manager to select your Workspace, then click OK.

2. In the Repository URL box, type in the location of the new files, including the complete path. Or you may click on the Change button and select the correct Repository, then click OK.

To Start Upgrade

1. Click the Start Upgrade button to run the upgrade. The Upgrade Utility converts the Record Layouts first, then the Maps, and finally the Processes. If you need to stop the upgrade, you may click the Abort Upgrade button during the conversion. The step that is running when you select Abort Upgrade will complete, and then the upgrade will abort.

You can view the upgrade process in the Upgrade Status section. The Upgrade Status section will display Completed! after the Upgrade Utility finishes upgrading the Record Layouts, Maps and Processes.

2. Click Done to exit the program.

Limitation

When you upgrade a Transformation, the version number of the map gets reset to 1.0 (as if you were adding a new map to the database). You will need to change the version information in the Transformation and Map Properties dialog.


Engine Profiler

The Engine Profiler is a tool designed to fine tune your Transformations and Processes.

There is an excellent document available at

C:\Program Files\Pervasive\Cosmos\Common800\Help\engine_profiler.pdf

This document goes into detail of the functionality and use of the Engine Profiler.


Data Profiler

Data Profiler is a data quality analysis tool.

There is an excellent document available at: C:\Program Files\Pervasive\Cosmos\Common800\Help\data_profiler.pdf. This document goes into detail of the functionality and use of the Data Profiler.

There is also a class offering for this tool available from the Pervasive Training Department.