Oracle GoldenGate Best Practices: Heartbeat Table for Monitoring Lag Times v11.3

Document ID 1299679.1

Last update: 15-Dec-14

Steven George

Consulting Solution Architect, Fusion Middleware Architects Team: The A-Team


Disclaimer

This sample code is provided for educational purposes only and not supported by Oracle Support Services. It has been tested internally, however, and works as documented. We do not guarantee that it will work for you, so be sure to test it in your environment before relying on it.

Proofread this sample code before using it! Due to the differences in the way text editors, e-mail packages and operating systems handle text formatting (spaces, tabs and carriage returns), this sample code may not be in an executable state when you first receive it. Check over the sample code to ensure that errors of this type are corrected.

This document touches briefly on many important and complex concepts and does not provide a detailed explanation of any one topic since the intent is to present the material in the most expedient manner. The goal is simply to help the reader become familiar enough with the product to successfully design and implement an Oracle GoldenGate environment. To that end, it is important to note that the activities of design, unit testing and integration testing, which are crucial to a successful implementation, have been intentionally left out of the guide. All the sample scripts are provided as is. Oracle consulting service is highly recommended for any customized implementation.


Table of Contents

Disclaimer

Introduction

Configuration

Overview

Source and Target users

Heartbeat tables – Source and Target

Heartbeat tables – Target

Updating of the heartbeat table

Extract Configuration

Data Pump Configuration

Replicat Configuration

Conclusion

Parameter and Sql Scripts – Examples

Troubleshooting


Introduction

This document is intended to help you implement a heartbeat process that can be used to determine where and when lag is developing between a source and target system.

This document will walk you thru the step-by-step process of creating the necessary tables and adding the table mapping statements needed to keep track of processing times between a source and target database. Once the information is added into the data flow, the information is then stored into the target tables, where it can be analyzed to determine where and when the lag is being introduced between the source and target systems.

By comparing commit and current timestamps you can identify when and where lag is developing in each of the processes.

These tables will allow you to:

» Create a history of lag to determine what time of day lag develops.

» Create a history to identify if lag is increasing over time.

» Identify if an upstream process is stopped for any reason.

» Monitor DML and DDL statistics.

Configuration

Overview

GoldenGate heartbeat implementation requires the following application and database modifications.

» Add a Heartbeat table to the Source GoldenGate database schema.

» Add a Heartbeat status table and a history table to the Target GoldenGate database schema.

» Add mapping statements to each process.

» Create a DBMS scheduler job on the source database to update the heartbeat table.

The source and target heartbeat tables all have the same structure. In the source system, the heartbeat is a single-row update of the timestamp. The extract process will extract the updated information and add a couple of tokens to the record. This information is written in the trail for the following process to read and add additional information to the record as it passes through the data pump and then into the target.

On the target system the first record will be inserted into the heartbeat table and again inserted into a history table. Following records are then updated in the status table and inserted into the history table. When the record is inserted, additional information will be added to the columns in the table via tokens. A trigger is added to the heartbeat tables to automatically calculate the lag for each record. For this example, the target will include information from the source, information on the data pump, if used, and target replicat information.

The following information will help you determine where and when the lag time is developing.


Source and Target users

Both the source and target will need a schema in order to create the source and target tables. It is recommended that you use the GoldenGate user/schema that you have already set up as part of the base GoldenGate configuration. Note: if this is a bidirectional configuration where the GoldenGate user is excluded, you will need to create the source heartbeat table in a different schema. The target tables can be under the GoldenGate schema.

On Source system –

SQL> create user source identified by ggs;
SQL> grant connect, resource, dba to source;

On Target system –

SQL> create user target identified by ggs;
SQL> grant connect, resource, dba to target;

Heartbeat tables – Source and Target

In this configuration we are using three heartbeat tables: HEARTBEAT, GGS_HEARTBEAT and GGS_HEARTBEAT_HISTORY. All of the tables have the same column layout but are used for different information.

The HEARTBEAT table may have more than one row in it, depending on the number of threads (RAC), and the only columns that are updated are the update timestamp and the source DB name. On the target system the GGS_HEARTBEAT table contains the current (last update) heartbeat for all of the replicats that are mapping into the heartbeat. The last table is the GGS_HEARTBEAT_HISTORY table.

All of the heartbeat records are inserted into the history table. This table can be used to chart lag for a process over a period of time. In a production environment you may want to partition this table to ease its maintenance.
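For example, once the history table is populated, a query along these lines can chart lag by hour for each replicat group (a sketch only; it assumes the target tables live under the TARGET schema created above and uses the column names from the layout below):

```sql
-- Average and worst total lag per replicat group, bucketed by hour.
SELECT delgroup,
       TRUNC(target_commit, 'HH24') AS hour_bucket,
       AVG(totallag) AS avg_lag,
       MAX(totallag) AS max_lag
FROM   target.ggs_heartbeat_history
GROUP  BY delgroup, TRUNC(target_commit, 'HH24')
ORDER  BY delgroup, hour_bucket;
```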

THIS IS THE LAYOUT FOR ALL OF THE HEARTBEAT TABLES.

Column            Contents
ID                Sequence number.
SRC_DB            Source database name.
EXTRACT_NAME      Name of the extract process, from a token.
SOURCE_COMMIT     Source commit timestamp, from the record header.
TARGET_COMMIT     When the record was added to the target. This is updated by the trigger.
CAPTIME           Added as a token using the DATENOW() function.
CAPLAG            Capture time minus commit time on the source. This is updated by the trigger.
PMPTIME           When the token timestamp was added to the trail in the data pump, using DATENOW().
PMPGROUP          Data pump group name.
PMPLAG            Capture time minus the time that the record was passed thru the data pump. Value is calculated by the update trigger.
DELTIME           Added as part of the map statement using DATENOW().
DELGROUP          Name of the replicat group.
DELLAG            The difference between the time the record was passed thru the data pump and when the record was inserted into the target table. This is calculated in the trigger.
TOTALLAG          The difference between the target commit time and the source commit time. This is updated via the trigger.
THREAD            Thread number from the instance.
UPDATE_TIMESTAMP  The system time of the update into the heartbeat table.
EDDLDELTASTATS    DDL operations since the last gathering (delta) of stats on the extract process.
EDMLDELTASTATS    DML operations since the last gathering (delta) of stats on the extract process.
RDDLDELTASTATS    DDL operations since the last gathering (delta) of stats on the replicat process.
RDMLDELTASTATS    DML operations since the last gathering (delta) of stats on the replicat process.

To create the tables you will need to run a sqlplus script that creates the heartbeat tables and adds a row to the heartbeat table –

SQL> @<path>/heartbeat_tables_v11.sql

The next step is to add trandata to the source heartbeat table using the GGSCI command interface –

GGSCI> ADD TRANDATA SOURCE.HEARTBEAT

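The heartbeat_tables_v11.sql script itself is not reproduced in this excerpt. As a rough sketch of what it creates (hypothetical DDL derived from the column layout above, with assumed datatypes), the source table looks approximately like this:

```sql
-- Hypothetical sketch only; use the heartbeat_tables_v11.sql
-- script shipped with this note for the real definitions.
CREATE TABLE source.heartbeat (
  id               NUMBER,
  src_db           VARCHAR2(128),
  extract_name     VARCHAR2(8),
  source_commit    TIMESTAMP(6),
  target_commit    TIMESTAMP(6),
  captime          TIMESTAMP(6),
  caplag           NUMBER,
  pmptime          TIMESTAMP(6),
  pmpgroup         VARCHAR2(8),
  pmplag           NUMBER,
  deltime          TIMESTAMP(6),
  delgroup         VARCHAR2(8),
  dellag           NUMBER,
  totallag         NUMBER,
  thread           NUMBER,
  update_timestamp TIMESTAMP(6),
  eddldeltastats   NUMBER,
  edmldeltastats   NUMBER,
  rddldeltastats   NUMBER,
  rdmldeltastats   NUMBER
);
```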

Heartbeat tables – Target

As the heartbeat data flows through the different processes, each process adds tokens to the record so that you can track the lag time from the source system to the target. The first table, GGS_HEARTBEAT, has only one record for each replicat process that is feeding into it. If you have ten replicats you should have ten rows in this table. The main idea of this table is that if you want to look at the current lag times of your replicats, you only need to look at one small table. The second table is the GGS_HEARTBEAT_HISTORY table. As its name implies, it is a history of all of the heartbeats. This can be used to determine if and when you have had lag in the past. Because this is an "insert all records" table, it is a good idea to partition the table so that you can manage its size over time.

A trigger on the target heartbeat tables does the calculations when the record is inserted or updated on the target. Lag times are expressed in seconds, with microsecond precision.

The configuration of the triggers is dependent on the version of GoldenGate. Prior to 12c, NOSUPPRESSTRIGGERS was the default. In 12c, SUPPRESSTRIGGERS is now the default.

If you are using SUPPRESSTRIGGERS you will need to exclude the heartbeat tables' trigger, otherwise you will find that the columns populated by the trigger are blank. The way to exclude the trigger from SUPPRESSTRIGGERS is to grant the trigger an exception to the parameter. The way to do that is to execute the following command in SQL –

SQL> EXEC dbms_ddl.set_trigger_firing_property('<trigger_owner>', '<trigger_name>', FALSE);

To use [NO]SUPPRESSTRIGGERS, the Replicat user must have the privileges granted through the dbms_goldengate_auth.grant_admin_privilege package. This procedure is part of the Oracle database installation. See the database documentation for more information.
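For example, assuming the replicat runs as the TARGET user created earlier (adjust the grantee to your environment), the grant might look like:

```sql
-- Run as SYSDBA. Grants the GoldenGate admin privileges needed
-- for [NO]SUPPRESSTRIGGERS to the assumed replicat user TARGET.
EXEC dbms_goldengate_auth.grant_admin_privilege('TARGET');
```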

NOTE: The system clocks across all systems must be synchronized. If not, the calculated lag times will be inaccurate. If you see negative times, check the clocks.

CREATE OR REPLACE TRIGGER GGS_HEARTBEAT_TRIG
BEFORE INSERT OR UPDATE ON GGS_HEARTBEAT
FOR EACH ROW
BEGIN
  select seq_ggs_HEARTBEAT_id.nextval
  into :NEW.ID from dual;
  select systimestamp
  into :NEW.TARGET_COMMIT from dual;
  select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),1,
           instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')))) * 86400
       + to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
           instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
       + to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
           instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2)) * 60
       + to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
           instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
       + to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
           instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
  into :NEW.CAPLAG from dual;
  select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1,
           instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')))) * 86400
       + to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
           instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
       + to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
           instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2)) * 60
       + to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
           instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
       + to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
           instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
  into :NEW.PMPLAG from dual;
  select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1,
           instr((:NEW.DELTIME - :NEW.PMPTIME),' ')))) * 86400
       + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
           instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
       + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
           instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2)) * 60
       + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
           instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
       + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
           instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
  into :NEW.DELLAG from dual;
  select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1,
           instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')))) * 86400
       + to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
           instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
       + to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
           instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2)) * 60
       + to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
           instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
       + to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
           instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
  into :NEW.TOTALLAG from dual;
end;
/
ALTER TRIGGER "GGS_HEARTBEAT_TRIG" ENABLE;
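The substr/instr arithmetic above converts the INTERVAL DAY TO SECOND produced by subtracting two timestamps into seconds. As an aside (not part of the original script), the same value can be computed more directly with EXTRACT; for example, for CAPLAG:

```sql
-- Equivalent CAPLAG calculation using EXTRACT on the interval;
-- EXTRACT(SECOND ...) includes the fractional seconds.
:NEW.CAPLAG :=
    EXTRACT(DAY    FROM (:NEW.CAPTIME - :NEW.SOURCE_COMMIT)) * 86400
  + EXTRACT(HOUR   FROM (:NEW.CAPTIME - :NEW.SOURCE_COMMIT)) * 3600
  + EXTRACT(MINUTE FROM (:NEW.CAPTIME - :NEW.SOURCE_COMMIT)) * 60
  + EXTRACT(SECOND FROM (:NEW.CAPTIME - :NEW.SOURCE_COMMIT));
```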

Updating of the heartbeat table

The heartbeat table is updated via a stored procedure that is executed by a DBMS_JOB using the scheduler.

This is the DBMS_SCHEDULER command to create the job. It may be easier to create the job using a tool like OEM or SQL Developer. The key to using DBMS_SCHEDULER is the ability to repeat the task at a predefined interval. In this example "repeat_interval" is set to every minute.

SQL> @HB_DBMS_SCHEDULER.sql

-- connect / as sysdba
accept ogg_user prompt 'GoldenGate User name:'
grant select on v_$instance to &&ogg_user;
grant select on v_$database to &&ogg_user;
BEGIN
  SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '&&ogg_user..OGG_HB',
    defer => false, force => false);
END;


/
CREATE OR REPLACE PROCEDURE &&ogg_user..gg_update_hb_tab IS
  v_thread_num NUMBER;
  v_db_unique_name VARCHAR2 (128);
BEGIN
  SELECT db_unique_name
  INTO v_db_unique_name
  FROM v$database;
  UPDATE &&ogg_user..heartbeat
  SET update_timestamp = SYSTIMESTAMP,
      src_db = v_db_unique_name;
  COMMIT;
END;
/
BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB (
    job_name => '&&ogg_user..OGG_HB',
    job_type => 'STORED_PROCEDURE',
    job_action => '&&ogg_user..GG_UPDATE_HB_TAB',
    number_of_arguments => 0,
    start_date => NULL,
    repeat_interval => 'FREQ=MINUTELY',
    end_date => NULL,
    job_class => '"SYS"."DEFAULT_JOB_CLASS"',
    enabled => FALSE,
    auto_drop => FALSE,
    comments => 'GoldenGate',
    credential_name => NULL,
    destination_name => NULL);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
    name => '&&ogg_user..OGG_HB',
    attribute => 'restartable', value => TRUE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
    name => '&&ogg_user..OGG_HB',
    attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
  SYS.DBMS_SCHEDULER.enable(
    name => '&&ogg_user..OGG_HB');
END;
/

You can use the following sql to check the status of the job –

col REPEAT_INTERVAL format a15
col NEXT_RUN_DATE format a38
col OWNER format a10
col JOB_NAME format a8
set linesize 120
select owner, job_name, job_class, enabled, next_run_date, repeat_interval
from dba_scheduler_jobs
where owner = decode(upper('&&ogg_user'), 'ALL', owner, upper('&&ogg_user'));
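Once the job is running, a quick sanity check (a sketch; it assumes the SOURCE schema used earlier) is to watch the heartbeat row's timestamp advance:

```sql
-- Run this twice, a minute or more apart; UPDATE_TIMESTAMP
-- should move forward if the scheduler job is firing.
SELECT src_db, thread, update_timestamp
FROM   source.heartbeat;
```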

Extract Configuration

In the extract parameter file a TABLE statement will need to be added in order to capture the update to the heartbeat table. Along with the update, a couple of tokens also need to be added in order to tell which extract and host the data originated from.

In this example you would add the extract using the following commands –

ADD EXTRACT ext_hb, TRANLOG, BEGIN NOW, THREADS <threads>
ADD EXTTRAIL ./dirdat/<Primary_trail>, EXTRACT ext_hb, MEGABYTES 100

Where you would substitute <threads> for the number of threads (RAC) and <Primary_trail> for the trail name you want to use.

Note: Token variables have changed in 12c. Replace double quotes (") with single (') quotes in the tokens.

Here is the include file with the heartbeat map statement for the heartbeat table –

Include file – ./dirprm/HB_Extract.inc

-- HB_Extract.inc
-- Heartbeat Table
-- update 9-1-12 SGEORGE – no checkpoint info.
TABLE <source schema>.HEARTBEAT, TOKENS (
CAPGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
CAPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
EDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));

This is an example of a complete extract parameter file with the include file for the heartbeat –

EXTRACT ext_hb
SETENV (ORACLE_SID=ora11g)
-- Use USERID to specify the type of database authentication for GoldenGate to use.
USERID source, PASSWORD ggs
EXTTRAIL ./dirdat/db
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/ext_hb.dsc, APPEND
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started.
-- REPORTCOUNT [EVERY] <count> {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches data.
FETCHOPTIONS, NOUSESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use STATOPTIONS to specify information to be included in statistical displays
-- generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- This is the Heartbeat table
include dirprm/HB_Extract.inc
-- The implementation of this parameter varies depending on the process.
-- TABLE source.*;

Data Pump Configuration

In the Data Pump parameter file you will need to include a TABLE statement so that the pump name and current timestamp are added to the record as the record is passed through the data pump. If you are doing DDL replication, you will need to add PASSTHRU for the tables that are using DDL and NOPASSTHRU for the heartbeat table.

Note: It is best to have the heartbeat table in a different schema than your application. If you wildcard the schema and have a TABLE statement for the heartbeat table, you will end up with duplicate records in the output trail.

Here are the commands to add the datapump and remote trail –

ADD EXTRACT pmp_hb, BEGIN NOW, EXTTRAILSOURCE ./dirdat/<Primary_trail>
ADD RMTTRAIL ./dirdat/<Target_trail>, EXTRACT pmp_hb, MEGABYTES 100

You will need to substitute <Primary_trail> and <Target_trail> for the trail names you want to use.

The include file for the heartbeat in the data pump is as follows:

-- ./dirprm/HB_pmp.inc
-- HB_pmp.inc
-- Heartbeat Table
table <source schema>.heartbeat, TOKENS (
PMPGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"),
PMPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")));

Example of a complete Data Pump parameter file –

-- Data Pump configuration file
-- last update -- 11-3-08 SGEORGE
-- update 9-1-12 SGEORGE - no checkpoint info.
extract PMP_hb
-- Database login info
userid <ogg_user>, password <ogg_pass>
-- Just in case we can't process a record we'll dump info here
discardfile ./dirrpt/PMP_hb.dsc, append
-- Remote host and remote manager port to write trail
rmthost <target_host>, mgrport <Target_mgr>
-- This is the Trail to where we output
rmttrail ./dirdat/<Target_trail>
-- Heartbeat
include dirprm/HB_pmp.inc
Table source_schema.*;


Replicat Configuration

The replicat will need to have the heartbeat table added to the map statements along with the token mapping. When the replicat inserts the row into the table, a “before insert” trigger will fire and update the values in the GGS_HEARTBEAT table.

There are two heartbeat tables. The first is the heartbeat table that has the current heartbeat information; it will have only one row for each replicat. The second table is the history table that contains all of the heartbeat records. This table can be used to graph the lag time in each replicat end to end.

As with the extract and data pump, we are adding data to the record when we insert the row into the target heartbeat table.
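The graphing use of the history table can be sketched in code. Here is a minimal example (Python, with made-up sample rows standing in for a query against GGS_HEARTBEAT_HISTORY) of turning source and target commit times into a per-replicat lag series suitable for plotting:

```python
from datetime import datetime

# Made-up sample rows standing in for:
#   SELECT delgroup, source_commit, target_commit
#   FROM ggs_heartbeat_history ORDER BY id;
rows = [
    ("REP_HB", datetime(2014, 12, 15, 10, 0, 0), datetime(2014, 12, 15, 10, 0, 2)),
    ("REP_HB", datetime(2014, 12, 15, 10, 1, 0), datetime(2014, 12, 15, 10, 1, 5)),
]

def lag_series(rows):
    """Group end-to-end lag (in seconds) by replicat group, ready for plotting."""
    series = {}
    for delgroup, source_commit, target_commit in rows:
        lag = (target_commit - source_commit).total_seconds()
        series.setdefault(delgroup, []).append((source_commit, lag))
    return series

print(lag_series(rows))
```

This is only a sketch; in practice the rows would come from a database query, and the (timestamp, lag) pairs would feed whatever charting tool you use.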

Here is an example of the command to add the replicat –

ADD REPLICAT REP_HB, EXTTRAIL ./dirdat/<Rep_PR_Trail>, NODBCHECKPOINT

This is the include file for the Map statement:

-- ./dirprm/HB_Rep.inc
-- Heartbeat table
MAP <source schema>.HEARTBEAT, TARGET <target schema>.GGS_HEARTBEAT,
KEYCOLS (DELGROUP),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
EXTRACT_NAME = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @TOKEN ("EDDLDELTASTATS"),
EDMLDELTASTATS = @TOKEN ("EDMLDELTASTATS"),
RDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
RDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));

MAP <source schema>.HEARTBEAT, TARGET <target schema>.GGS_HEARTBEAT_HISTORY,
KEYCOLS (ID),
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
EXTRACT_NAME = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @TOKEN ("EDDLDELTASTATS"),
EDMLDELTASTATS = @TOKEN ("EDMLDELTASTATS"),
RDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
RDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));


This is an example of a complete Replicat parameter file:

replicat rep_hb
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot
-- site. DO NOT USE IF YOU USE THE COLMAP Statement. USE Sourcedef file.
Assumetargetdefs
-- setting oracle_environment variable
-- useful in multi oracle home situations
-- setenv (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db112")
setenv (ORACLE_SID="db112r1")
-- userid password, password encrypted using encrypt ggsci command
userid SOURCE, password GGS
-- Just in case we can't process a record we'll dump info here
discardfile ./dirrpt/REP_hb.dsc, append
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started
-- REPORTCOUNT [EVERY] <count> {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
include ./dirprm/HB_Rep.inc
map <SOURCE_SCHEMA>.* ,target <TARGET_SCHEMA>.*;

Conclusion

In order to calculate the true lag you will need to add the heartbeat table into the extracts (extract and data pump) and replicats. By using the tokens that are added to the trail and the commit time on the target, you can tell the true lag between systems even with low data flow. Also, using this method you can tell on the target if the data flow from the source has been interrupted, because you can check the last update time and compare that to the current time.

It is critical that clocks on both the source and target systems are in sync. Note, OGG does correct the commit timestamp for differences between the source and target systems.
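The arithmetic described above is plain timestamp subtraction. As a rough sketch (Python, with hypothetical timestamp values in place of real heartbeat rows), each stage lag is a timestamp difference, and an interrupted data flow shows up as a stale last-update time:

```python
from datetime import datetime, timedelta

# Hypothetical heartbeat row values (these mirror the GGS_HEARTBEAT columns).
source_commit = datetime(2014, 12, 15, 12, 0, 0)         # commit on the source
captime       = source_commit + timedelta(seconds=1)     # extract captured the record
pmptime       = captime + timedelta(seconds=1)           # pump forwarded the record
target_commit = source_commit + timedelta(seconds=5)     # replicat applied the record

caplag   = (captime - source_commit).total_seconds()     # extract lag
pmplag   = (pmptime - captime).total_seconds()           # pump lag
totallag = (target_commit - source_commit).total_seconds()  # true end-to-end lag

def heartbeat_stale(last_update, now, max_age_seconds=120):
    """Flag an interrupted data flow: no heartbeat applied recently."""
    return (now - last_update).total_seconds() > max_age_seconds

print(caplag, pmplag, totallag)
print(heartbeat_stale(target_commit, target_commit + timedelta(minutes=5)))
```

The 120-second threshold is an assumption for illustration; pick it relative to the scheduler job's one-minute heartbeat interval, and remember that this arithmetic is only meaningful when the two system clocks are in sync.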

Parameter and Sql Scripts – Examples

You will need to edit the following scripts and manually change the variables to the correct values for your system. You may need to change the user that you are installing the scripts under on each system. In the example I use the users "SOURCE" and "TARGET"; you will need to update these based on your configuration. You will need to run the same scripts for the source and target, but the variables may be different between the two systems.

del_lag.sql

set pagesize 200
col "Total Lag" format a30
col "Extract Lag" format a30
col "Pump Lag" format a30
select DELGROUP,
(CAPTIME - SOURCE_COMMIT) "Extract Lag",
(PMPTIME - SOURCE_COMMIT) "Pump Lag",
(TARGET_COMMIT - SOURCE_COMMIT) "Total Lag"
from target.ggs_heartbeat_history
order by id;


Ext_hb.prm

-- Extract example for Heartbeat
-- 4-9-10 SGEORGE
--
EXTRACT ext_hb
SETENV (ORACLE_SID=db112r1)
-- Use USERID to specify the type of database authentication for GoldenGate to use.
USERID SOURCE, PASSWORD GGS
TRANLOGOPTIONS DBLOGREADER
EXTTRAIL ./dirdat/HB
-- Use DISCARDFILE to generate a discard file to which Extract or Replicat can log
-- records that it cannot process. GoldenGate creates the specified discard file in
-- the dirrpt sub-directory of the GoldenGate installation directory. You can use the
-- discard file for problem-solving.
DISCARDFILE ./dirrpt/ext_hb.dsc, APPEND
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started
-- REPORTCOUNT [EVERY] <count> {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
-- Use FETCHOPTIONS to control certain aspects of the way that GoldenGate fetches
FETCHOPTIONS, NOUSESNAPSHOT, NOUSELATESTVERSION, MISSINGROW REPORT
-- Use STATOPTIONS to specify information to be included in statistical displays
-- generated by the STATS EXTRACT or STATS REPLICAT command.
STATOPTIONS REPORTFETCH
-- This is the Heartbeat table
include dirprm/HB_Extract.inc
-- The implementation of this parameter varies depending on the process.
-- TABLE source.*;

HB_DBMS_SCHEDULER.sql

-- connect / as sysdba
accept ogg_user prompt 'GoldenGate User name:'
grant select on v_$instance to &&ogg_user;
grant select on v_$database to &&ogg_user;
BEGIN
SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '&&ogg_user..OGG_HB', defer => false, force => false);
END;
/
CREATE OR REPLACE PROCEDURE &&ogg_user..gg_update_hb_tab IS
v_thread_num NUMBER;
v_db_unique_name VARCHAR2 (128);
BEGIN
SELECT db_unique_name INTO v_db_unique_name FROM v$database;
UPDATE &&ogg_user..heartbeat
SET update_timestamp = SYSTIMESTAMP,
src_db = v_db_unique_name;


COMMIT;
END;
/
BEGIN
SYS.DBMS_SCHEDULER.CREATE_JOB (
job_name => '&&ogg_user..OGG_HB',
job_type => 'STORED_PROCEDURE',
job_action => '&&ogg_user..GG_UPDATE_HB_TAB',
number_of_arguments => 0,
start_date => NULL,
repeat_interval => 'FREQ=MINUTELY',
end_date => NULL,
job_class => '"SYS"."DEFAULT_JOB_CLASS"',
enabled => FALSE,
auto_drop => FALSE,
comments => 'GoldenGate',
credential_name => NULL,
destination_name => NULL);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '&&ogg_user..OGG_HB', attribute => 'restartable', value => TRUE);
SYS.DBMS_SCHEDULER.SET_ATTRIBUTE(
name => '&&ogg_user..OGG_HB', attribute => 'logging_level', value => DBMS_SCHEDULER.LOGGING_OFF);
SYS.DBMS_SCHEDULER.enable(
name => '&&ogg_user..OGG_HB');
END;
/

HB_Extract.inc

-- HB_Extract.inc
-- Heartbeat Table
-- update 9-1-12 SGEORGE - no checkpoint info.
TABLE SOURCE.HEARTBEAT, TOKENS (
CAPGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
CAPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
EDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));

HB_pmp.inc

-- HB_pmp.inc
-- Heartbeat Table
table SOURCE.heartbeat, TOKENS (
PMPGROUP = @GETENV ("GGENVIRONMENT","GROUPNAME"),
PMPTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")));


HB_Rep.inc

-- Heartbeat table
MAP target.HEARTBEAT, TARGET SOURCE.GGS_HEARTBEAT,
KEYCOLS (DELGROUP),
INSERTMISSINGUPDATES,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
EXTRACT_NAME = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @TOKEN ("EDDLDELTASTATS"),
EDMLDELTASTATS = @TOKEN ("EDMLDELTASTATS"),
RDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
RDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));

MAP target.HEARTBEAT, TARGET SOURCE.GGS_HEARTBEAT_HISTORY,
KEYCOLS (ID),
INSERTALLRECORDS,
COLMAP (USEDEFAULTS,
ID = 0,
SOURCE_COMMIT = @GETENV ("GGHEADER", "COMMITTIMESTAMP"),
EXTRACT_NAME = @TOKEN ("CAPGROUP"),
CAPTIME = @TOKEN ("CAPTIME"),
PMPGROUP = @TOKEN ("PMPGROUP"),
PMPTIME = @TOKEN ("PMPTIME"),
DELGROUP = @GETENV ("GGENVIRONMENT", "GROUPNAME"),
DELTIME = @DATE ("YYYY-MM-DD HH:MI:SS.FFFFFF","JTS",@GETENV ("JULIANTIMESTAMP")),
EDDLDELTASTATS = @TOKEN ("EDDLDELTASTATS"),
EDMLDELTASTATS = @TOKEN ("EDMLDELTASTATS"),
RDDLDELTASTATS = @GETENV ("DELTASTATS", "DDL"),
RDMLDELTASTATS = @GETENV ("DELTASTATS", "DML"));

HB_Stop.sql

set verify off
set linesize 200
set pagesize 80
col OWNER format a15
col JOB_NAME format a20
col JOB_CLASS format a20
col NEXT_RUN_DATE format a40
col REPEAT_INTERVAL format a50
accept ogg_user prompt 'GoldenGate User name:'
select owner, job_name, job_class,


enabled, next_run_date, repeat_interval
from dba_scheduler_jobs
where owner = decode(upper('&&ogg_user'), 'ALL', owner, upper('&&ogg_user'));
COL ID format 999,999
COL SRC_DB format a10
COL EXTRACT_NAME format a8
COL SOURCE_COMMIT format a28
COL TARGET_COMMIT format a28
COL CAPTIME format a28
COL CAPLAG format 999.000
COL PMPTIME format a28
COL PMPGROUP format a8
COL PMPLAG format 999.000
COL DELTIME format a28
COL DELGROUP format a8
COL DELLAG format 999.000
COL TOTALLAG format 999.000
COL THREAD format 99
COL UPDATE_TIMESTAMP format a28
select * from &&ogg_user.heartbeat;
BEGIN
SYS.DBMS_SCHEDULER.DROP_JOB(job_name => '"&&ogg_user"."OGG_HB"',
defer => false, force => true);
END;
/
select owner, job_name, job_class,
enabled, next_run_date, repeat_interval
from dba_scheduler_jobs
where owner = decode(upper('&&ogg_user'), 'ALL', owner, upper('&&ogg_user'));

HB_table.sql

set pagesize 200
col Lag format a30
col SOURCE_COMMIT format a30
col TARGET_COMMIT format a30
col CAPTIME format a30
col PMPTIME format a30
col DELTIME format a30
col START_TIME format a30


col RECOVERY_TIMESTAMP format a30
col UPDATE_TIMESTAMP format a30
accept ogg_user prompt 'GoldenGate User name:'
select * from &&ogg_user.ggs_heartbeat;

heartbeat_check.sql

set echo off
set concat ,
set verify off
COL ID format 999,999
COL SRC_DB format a10
COL EXTRACT_NAME format a8
COL SOURCE_COMMIT format a28
COL TARGET_COMMIT format a28
COL CAPTIME format a28
COL CAPLAG format 999.000
COL PMPTIME format a28
COL PMPGROUP format a8
COL PMPLAG format 999.000
COL DELTIME format a28
COL DELGROUP format a8
COL DELLAG format 999.000
COL TOTALLAG format 999.000
COL THREAD format 99
COL UPDATE_TIMESTAMP format a28
accept ogg_user prompt 'GoldenGate User name:'
select * from &&ogg_user.GGS_HEARTBEAT;
select src_db, extract_name, Caplag, pmplag, dellag, totallag, UPDATE_TIMESTAMP from &&ogg_user.GGS_HEARTBEAT;

heartbeat_tables_v11.sql

This script is used to create the heartbeat tables. In environments that are only running OGG in one direction, the GGS_HEARTBEAT and the GGS_HEARTBEAT_HISTORY tables only need to be created on the target system. In bi-directional configurations you will need all tables on both sides.

-- Heartbeat table V11
-- This is created on the SOURCE system
-- Last update
-- 10-30-08 SGEORGE
-- 11-25-08 Table updated to match target. PK is different on source.
-- 3-19-10 SGEORGE - changed format, updated for timestamp.
-- 9-1-12 SGEORGE - Updated HB table for Bi-directional
-- 3-4-13 SGEORGE - Updated script to prompt for schema name.


accept ogg_user prompt 'GoldenGate User name:'
drop table &&ogg_user..heartbeat;
-- Create table statement
CREATE TABLE &&ogg_user..HEARTBEAT (
ID NUMBER,
SRC_DB VARCHAR2(30),
EXTRACT_NAME varchar2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
thread number,
update_timestamp timestamp,
EDDLDELTASTATS number,
EDMLDELTASTATS number,
RDDLDELTASTATS number,
RDMLDELTASTATS number,
CONSTRAINT HEARTBEAT_PK PRIMARY KEY (SRC_DB) ENABLE)
/
-- this assumes that the table is empty
INSERT INTO &&ogg_user..HEARTBEAT (SRC_DB) select db_unique_name from V$database;
commit;
DROP SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_ID ;
CREATE SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_ID INCREMENT BY 1 START WITH 1 ORDER ;
DROP TABLE &&ogg_user..GGS_HEARTBEAT;
CREATE TABLE &&ogg_user..GGS_HEARTBEAT (
ID NUMBER,
SRC_DB VARCHAR2(30),
EXTRACT_NAME varchar2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
thread number,
update_timestamp timestamp,
EDDLDELTASTATS number,
EDMLDELTASTATS number,
RDDLDELTASTATS number,
RDMLDELTASTATS number,
CONSTRAINT GGS_HEARTBEAT_PK PRIMARY KEY (DELGROUP) ENABLE);


CREATE OR REPLACE TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG
BEFORE INSERT OR UPDATE ON &&ogg_user..GGS_HEARTBEAT
FOR EACH ROW
BEGIN
select seq_ggs_HEARTBEAT_id.nextval
into :NEW.ID
from dual;
select systimestamp
into :NEW.target_COMMIT
from dual;
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),1,
instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2)) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1,
instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2)) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000
into :NEW.PMPLAG
from dual;
select trunc(to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),1,
instr(:NEW.DELTIME - :NEW.PMPTIME,' ')))) * 86400
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2)) * 60
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2))
+ to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),
instr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000
into :NEW.DELLAG
from dual;
select trunc(to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),1,
instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2)) * 60
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),
instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.TOTALLAG
from dual;
end;
/
ALTER TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG ENABLE;
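The SUBSTR/INSTR arithmetic in the trigger above parses the string form of an Oracle DAY TO SECOND interval (for example '+000000000 00:00:01.234567') into seconds. A minimal Python sketch of the same conversion, assuming that default interval format:

```python
def interval_to_seconds(interval):
    """Convert an Oracle DAY TO SECOND interval string to seconds,
    mirroring the SUBSTR/INSTR logic in the GGS_HEARTBEAT trigger."""
    days, clock = interval.split(" ")           # the trigger finds this space with INSTR
    hours, minutes, seconds = clock.split(":")  # fixed offsets +1, +4, +7 in the trigger
    return (int(days) * 86400
            + int(hours) * 3600
            + int(minutes) * 60
            + float(seconds))                   # seconds plus the .FFFFFF fraction

print(interval_to_seconds("+000000000 00:00:01.234567"))
```

The sample interval string is hypothetical; the trigger applies this conversion to each stage difference (CAPTIME - SOURCE_COMMIT, PMPTIME - CAPTIME, DELTIME - PMPTIME, TARGET_COMMIT - SOURCE_COMMIT) to populate the lag columns.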


--
-- This is for the History heartbeat table
--
DROP SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST ;
CREATE SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST INCREMENT BY 1 START WITH 1 ORDER ;
DROP TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY;
CREATE TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY (
ID NUMBER,
SRC_DB VARCHAR2(30),
EXTRACT_NAME varchar2(8),
SOURCE_COMMIT TIMESTAMP,
TARGET_COMMIT TIMESTAMP,
CAPTIME TIMESTAMP,
CAPLAG NUMBER,
PMPTIME TIMESTAMP,
PMPGROUP VARCHAR2(8 BYTE),
PMPLAG NUMBER,
DELTIME TIMESTAMP,
DELGROUP VARCHAR2(8 BYTE),
DELLAG NUMBER,
TOTALLAG NUMBER,
thread number,
update_timestamp timestamp,
EDDLDELTASTATS number,
EDMLDELTASTATS number,
RDDLDELTASTATS number,
RDMLDELTASTATS number
);
CREATE OR REPLACE TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG_HIST
BEFORE INSERT OR UPDATE ON &&ogg_user..GGS_HEARTBEAT_HISTORY
FOR EACH ROW
BEGIN
select seq_ggs_HEARTBEAT_HIST.nextval
into :NEW.ID
from dual;
select systimestamp
into :NEW.target_COMMIT
from dual;
select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),1,
instr(:NEW.CAPTIME - :NEW.SOURCE_COMMIT,' ')))) * 86400
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2)) * 60
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+7,2))
+ to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),
instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000
into :NEW.CAPLAG
from dual;
select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1,
instr(:NEW.PMPTIME - :NEW.CAPTIME,' ')))) * 86400
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+1,2)) * 3600
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+4,2)) * 60
+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),
instr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+7,2))

EAT TABLE FOR MONITORING LAG TIMES

This is for the History heartbeat table

DROP SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST ; CREATE SEQUENCE &&ogg_user..SEQ_GGS_HEARTBEAT_HIST INCREMENT BY 1 START WITH 1 ORDER

DROP TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY; CREATE TABLE &&ogg_user..GGS_HEARTBEAT_HISTORY

CREATE OR REPLACE TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG_HIST BEFORE INSERT OR UPDATE ON &&ogg_user..GGS_HEARTBEAT_HISTORY

select seq_ggs_HEARTBEAT_HIST.nextval

select trunc(to_number(substr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT ),1, :NEW.SOURCE_COMMIT,' ')))) * 86400

:NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - E_COMMIT),' ')+1,2)) * 3600

:NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60

:NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME -

:NEW.SOURCE_COMMIT), instr((:NEW.CAPTIME - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000

select trunc(to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),1, instr(:NEW.PMPTIME

:NEW.CAPTIME), instr((:NEW.PMPTIME -

:NEW.CAPTIME), instr((:NEW.PMPTIME -

:NEW.CAPTIME), instr((:NEW.PMPTIME -

T WITH 1 ORDER

:NEW.CAPTIME),1, instr(:NEW.PMPTIME -

19 | ORACLE GOLDENGATE BEST PRACTICES: HEARTBEAT

+ to_number(substr((:NEW.PMPTIME - :NEW.CAPTIME),' ')+10,6)) / 1000000into :NEW.PMPLAG from dual; select trunc(to_number(substr((:NEW.DELTIME :NEW.PMPTIME,' ')))) * 86400 + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),' ')+1,2)) * 3600 + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),' ')+4,2) ) * 60 + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),' ')+7,2)) + to_number(substr((:NEW.DELTIME - :NEW.PMPTIME),' ')+10,6)) / 1000000into :NEW.DELLAG from dual; select trunc(to_number(substr((:NEW.TARGET_COMMIT instr(:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT,' ')))) * 86400+ to_number(substr((:NEW.TARGET_COMMIT instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+1,2)) * 3600+ to_number(substr((:NEW.TARGET_COMMIT instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+4,2) ) * 60+ to_number(substr((:NEW.TARGET_COMMIT instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+7,2))+ to_number(substr((:NEW.TARGET_COMMIT instr((:NEW.TARGET_COMMIT - :NEW.SOURCE_COMMIT),' ')+10,6)) / 1000000into :NEW.TOTALLAG from dual; end ; / ALTER TRIGGER &&ogg_user..GGS_HEARTBEAT_TRIG_HIST ENABLE;
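The history trigger derives four lag figures from the same set of timestamps: CAPLAG, PMPLAG and DELLAG are hop-by-hop deltas, while TOTALLAG spans the whole path, so by construction it equals their sum plus the final replicat apply delta. A small sketch of that decomposition, using hypothetical timestamps:

```python
# Sketch of the lag decomposition the history trigger performs.
from datetime import datetime, timedelta

source_commit = datetime(2014, 12, 15, 12, 0, 0)      # hypothetical times
captime       = source_commit + timedelta(seconds=2)  # extract captured it
pmptime       = captime + timedelta(seconds=1)        # pump forwarded it
deltime       = pmptime + timedelta(seconds=3)        # replicat read it
target_commit = deltime + timedelta(seconds=1)        # replicat applied it

caplag   = (captime - source_commit).total_seconds()
pmplag   = (pmptime - captime).total_seconds()
dellag   = (deltime - pmptime).total_seconds()
totallag = (target_commit - source_commit).total_seconds()

# TOTALLAG = CAPLAG + PMPLAG + DELLAG + time between delivery and apply
apply_delta = (target_commit - deltime).total_seconds()
assert totallag == caplag + pmplag + dellag + apply_delta
print(caplag, pmplag, dellag, totallag)  # → 2.0 1.0 3.0 7.0
```

This is why a spike in TOTALLAG can always be attributed to exactly one hop by comparing the three component columns.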

mgr.prm

COMMENT ***********************************************************************
COMMENT * Primary System MANAGER CONFIG FILE                                  *
COMMENT * Filename: mgr.prm                                                   *
COMMENT * Purpose: The MGR process specifies the port on which all GoldenGate *
COMMENT * processes communicate.                                              *
COMMENT ***********************************************************************
--
-- 4-21-09 - Updated for GGS 10 SGEORGE
PORT 8999
-- DYNAMICPORTLIST was 7810-7899
COMMENT ***********************************************************************
COMMENT * GoldenGate trails matching es* will be purged after there has been  *
COMMENT * no activity for 5 days.                                             *
COMMENT ***********************************************************************
-- PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 5
COMMENT ***********************************************************************
COMMENT * Automatically start any extract and replicat processes at startup  *


COMMENT * and will attempt to restart any extract process that abends after  *
COMMENT * waiting 2 minutes, but only up to 5 attempts.                      *
COMMENT ***********************************************************************
-- These are commented out for testing
-- AUTOSTART EXTRACT EXT_*
-- AUTOSTART EXTRACT PMP_*
-- These are commented out for testing
-- AUTORESTART EXTRACT EXT_*, WAITMINUTES 2, RETRIES 5
-- AUTORESTART EXTRACT PMP_*, WAITMINUTES 2, RETRIES 5

LAGREPORTHOURS 1
LAGINFOMINUTES 3
LAGCRITICALMINUTES 5
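With these settings Manager reports lag hourly and writes an informational event once a process falls 3 minutes behind and a critical event at 5 minutes. The classification logic is simple; a sketch with the thresholds copied from the parameter file above (the function name is ours, and this only approximates Manager's behavior for illustration):

```python
# Classify a process lag the way the Manager thresholds above suggest:
# LAGINFOMINUTES 3 -> informational entry, LAGCRITICALMINUTES 5 -> critical.
LAG_INFO_MINUTES = 3
LAG_CRITICAL_MINUTES = 5

def classify_lag(lag_seconds: float) -> str:
    minutes = lag_seconds / 60
    if minutes >= LAG_CRITICAL_MINUTES:
        return 'CRITICAL'
    if minutes >= LAG_INFO_MINUTES:
        return 'INFO'
    return 'OK'

print(classify_lag(90))   # → OK        (1.5 minutes)
print(classify_lag(240))  # → INFO      (4 minutes)
print(classify_lag(400))  # → CRITICAL  (~6.7 minutes)
```

A monitoring script reading TOTALLAG from the heartbeat table could apply the same thresholds so that external alerting agrees with Manager's own log entries.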

pmp_hb.prm

-- Data Pump configuration file
-- last update
-- 11-3-08 SGEORGE
-- update 9-1-12 SGEORGE - no checkpoint info.
--
extract PMP_hb
-- Database login info
userid SOURCE, password GGS
-- Just in case we can't process a record we'll dump info here
discardfile ./dirrpt/PMP_hb.dsc, append
-- Remote host and remote manager port to write trail
rmthost coe-02, mgrport 9000
-- This is the trail to where we output
rmttrail ./dirdat/hb
-- Heartbeat
include dirprm/HB_pmp.inc
--
Table source.*;

rep_hb.prm

-- Updates
-- 3-17-10 - SGEORGE - Added reporting and heartbeat table and some comments.
-- 3-18-10 - SGEORGE - Updated the heartbeat table
-- 9-1-12  - SGEORGE - Updated HB table, removed checkpoint info not needed.
--
replicat rep_hb
-- Use ASSUMETARGETDEFS when the source and target tables specified with a MAP
-- statement have the same column structure, such as when synchronizing a hot
-- site. DO NOT USE IF YOU USE THE COLMAP statement. Use a sourcedefs file.
assumetargetdefs
-- setting oracle_environment variable
-- useful in multi oracle home situations
-- setenv (ORACLE_HOME="/u01/app/oracle/product/11.2.0/db112")
setenv (ORACLE_SID="db112r1")


-- userid password - password encrypted using the encrypt ggsci command
userid SOURCE, password GGS
-- Just in case we can't process a record we'll dump info here
discardfile ./dirrpt/REP_hb.dsc, append
-- Use REPORTCOUNT to generate a count of records that have been processed since
-- the Extract or Replicat process started
-- REPORTCOUNT [EVERY] <count> {RECORDS | SECONDS | MINUTES | HOURS} [, RATE]
REPORTCOUNT EVERY 5 MINUTES, RATE
include ./dirprm/HB_Rep.inc
--
map <SOURCE_SCHEMA>.*, target <TARGET_SCHEMA>.*;
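The RATE option of REPORTCOUNT adds processing rates to the report file alongside the record count: records per second since the process started, and the rate over the latest reporting interval. The arithmetic, sketched with hypothetical record counts (the function name is ours):

```python
# Sketch of the RATE arithmetic behind REPORTCOUNT: overall records/sec
# since startup, plus the delta rate over the latest reporting interval.
def rates(total_records, elapsed_seconds, prev_records, interval_seconds):
    overall = total_records / elapsed_seconds
    delta = (total_records - prev_records) / interval_seconds
    return overall, delta

overall, delta = rates(total_records=90_000, elapsed_seconds=900,
                       prev_records=60_000, interval_seconds=300)
print(overall, delta)  # → 100.0 100.0
```

Comparing the delta rate against the overall rate in successive report entries is a quick way to spot a replicat that is slowing down before lag alarms fire.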

Troubleshooting

Some common issues with the heartbeat tables:

Issue: Heartbeat not showing up at target.
Solution: Make sure the heartbeat table is not in an "excluded user" schema. This is a common issue in bi-directional configurations.

Issue: Duplicate records in the heartbeat table.
Solution: Check the mapping in the data pump. Make sure you are not capturing the heartbeat twice, once with the MAP statement and a second time with a wildcard MAP statement.

Issue: No heartbeat in trail.
Solution: Check to make sure the source heartbeat table has one row for each thread and the thread numbers are listed. The heartbeat table is updated by whichever thread the heartbeat is running on.

Issue: Error with Replicat.
Solution: Depending on the version of GoldenGate you are using, you may need to replace double quotes " with single quotes '. This is true for 12c.

Issue: No rows in trail.
Solution: Make sure the user that is performing the update on the heartbeat is not an "excluded user". Test by manually updating the table from another user.

Issue: No rows in trail.
Solution: Be careful about upgrading from an older version of the heartbeat. Older versions used filters and the new version does not.

Issue: Data Pump crashes because of DDL ("PUMP Abends Error: OGG-01161").
Solution: If you want to replicate DDL you will need to use NOPASSTHRU for the heartbeat table and PASSTHRU for all the other tables.

Issue: Row fetch statistics.
Solution: It has been reported that fetch statistics show up in the extract for the heartbeat table. I have been unable to reproduce this, but the suggested solution was to add a WHERE clause to the heartbeat update statement, i.e. "WHERE src_db = v_db_unique_name;".
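The "one row per thread" condition called out in the troubleshooting table is easy to script. A sketch, where the rows are hypothetical results of a query on the source GGS_HEARTBEAT table and the function name is ours:

```python
# Sketch: verify the source heartbeat table has exactly one row per
# RAC thread, the condition called out in the troubleshooting table.
def missing_threads(rows, expected_threads):
    """rows: list of (thread, src_db) tuples from GGS_HEARTBEAT."""
    seen = {thread for thread, _ in rows}
    return sorted(set(expected_threads) - seen)

rows = [(1, 'DB1'), (2, 'DB1')]          # hypothetical query result
print(missing_threads(rows, [1, 2, 3]))  # → [3]
```

Running such a check after setup, and again after adding a RAC instance, catches the "no heartbeat in trail" symptom before it shows up as missing lag data.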


Oracle Corporation, World Headquarters

500 Oracle Parkway

Redwood Shores, CA 94065, USA



Worldwide Inquiries

Phone: +1.650.506.7000

Fax: +1.650.506.7200


Copyright © 2014, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

Oracle GoldenGate Best Practices: Heartbeat Table for Monitoring Lag Times
December 2014
Author: Steven George

CONNECT WITH US

blogs.oracle.com/oracle

facebook.com/oracle

twitter.com/oracle

oracle.com
