
NoCOUG Summer Conference, Aug. 19, 2004

James F. Koopmann
Director of Technical Consulting
4750 Walnut Street, Suite 106
Boulder, Colorado 80301
303.938.8282 x115
720.219.3773 (mobile)
jameskoopmann@confio.com
www.confio.com

Get a handle on DDL using Oracle 9iR2 Streams

James F. Koopmann

• N years of IT / core RDBMS experience, Oracle (V6-10g)
• 8i & 9i OCP
• Writer: various publications & portals
• Speaker: here, there, & anywhere

• 3 technical articles a month
• Forum expert / moderator
• Database-centric vendor reviews
• Database general-interest issues

Where to Find Me on the NET

1. The DBA nightmare

2. How do you detect DDL changes

3. STREAMS overview

4. STREAMS environment

5. STREAMS setup

6. STREAMS sample DDL session

7. OEM

8. Sweet Dreams for the DBA

9. Running STREAMS

Content

The DBA Nightmare

• Problems

• Are problems “problems” only if seen?

• Is it a problem if no one sees a benefit?

• Should I be working on something that doesn’t solve a problem?

• Problems are caused by change

We need to be able to answer the hard questions

• What has happened?
• When did it happen?
• Will it happen again?

• We have to fix Problems

• Reduce finger pointing

• Research viable sources…Test / Validate…Acceptance

• Be wary of “Your mileage may vary” type statements

Problem Detection & Viable Solutions

Your actions are reactive, not proactive, in nature

The events that trigger an investigation are often very specific to an incident, narrow in scope, and the solutions typically do not take the full health of a database into consideration.

Most of your time is spent in problem investigation & detection, not problem solving

Investigation & detection inherently wastes money

Customers / users drive the work flow of the database administrators

Database administration group is seen as ineffective

The DBA Nightmare

Lazy Detection Methodology Shortcomings

• Has the issue been seen by others?

• Will someone benefit from you working on the issue?

• Are you solving a real problem that is causing pain for your users?

• Before solving the problem, are you able to determine the true root cause?

• Will solving the problem eliminate it from recurring?

The DBA Nightmare

Intelligent Detection Methodology Benefits

The DBA Nightmare

What Can We Detect / Solve

SQL> SELECT name, block_size,
  2         100*(1 - (physical_reads / (db_block_gets + consistent_gets))) buffhit
  3    FROM v$buffer_pool_statistics;

Buffer Cache - Hit Ratio

How we size the buffer cache
1. Just issue the SQL to see the hit ratio
2. Increase the buffer cache if the hit ratio is low
3. Done if the hit ratio is high

The DBA Nightmare

SQL> SELECT b.name, b.block_size,
  2         100*(1 - ((e.physical_reads - b.physical_reads) /
  3                  ((e.db_block_gets - b.db_block_gets) +
  4                   (e.consistent_gets - b.consistent_gets)))) buffhit
  5    FROM beg_buffer_pool_statistics b, end_buffer_pool_statistics e
  6   WHERE b.name = e.name AND b.block_size = e.block_size;

How to diff (see the sketch below)
1. Create table beg_buffer_pool_statistics as select * from v$buffer_pool_statistics
2. Run workload through the system
3. Create table end_buffer_pool_statistics as select * from v$buffer_pool_statistics
4. Just issue the SQL to see the hit ratio
5. Increase the buffer cache if the hit ratio is low
6. Done if the hit ratio is high

Buffer Cache - diff’d Hit Ratio
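A minimal sketch of the snapshot steps above (run as a user with access to v$buffer_pool_statistics; the table names match the diff query):

SQL> CREATE TABLE beg_buffer_pool_statistics AS
  2    SELECT * FROM v$buffer_pool_statistics;
-- run the workload through the system, then:
SQL> CREATE TABLE end_buffer_pool_statistics AS
  2    SELECT * FROM v$buffer_pool_statistics;
-- now issue the diff'd hit ratio query, and drop both snapshot tables when done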

The DBA Nightmare

SQL> SELECT name, block_size, size_for_estimate,
  2         estd_physical_read_factor, estd_physical_reads
  3    FROM V$DB_CACHE_ADVICE
  4   WHERE advice_status = 'ON';

Buffer Cache - Advisory

How to get the statistics (see the sketch below)
1. Set db_cache_advice to 'READY'
2. Set db_cache_advice to 'ON'
3. Run a valid workload through the system
4. Just issue the SQL to see the report
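A sketch of the advisory settings (DB_CACHE_ADVICE is a dynamic parameter; READY allocates the advisory memory, ON starts collection):

SQL> ALTER SYSTEM SET db_cache_advice = READY;
SQL> ALTER SYSTEM SET db_cache_advice = ON;
-- run a valid workload, then re-issue the query above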

The DBA Nightmare

         BLOCK Cache   Estd Phys   Estd Phys
NAME      SIZE  Size Read Factor       Reads
-------- ----- ----- ----------- -----------
DEFAULT   8192    48      2.1133   343669940
DEFAULT   8192    96      1.7266   280783364
DEFAULT   8192   144      1.4763   240091867
DEFAULT   8192   192      1.3573   220733606
DEFAULT   8192   240      1.2801   208181172
DEFAULT   8192   288      1.2165   197842182
DEFAULT   8192   336      1.1686   190052917
DEFAULT   8192   384      1.1202   182180544
DEFAULT   8192   432      1.0877   176884743
DEFAULT   8192   480      1.0602   172420984
DEFAULT   8192   528      1.0196   165812231
DEFAULT   8192   544      1.0000   162626093  <== CURRENT
DEFAULT   8192   576       .9765   158797352
DEFAULT   8192   624       .9392   152735392
DEFAULT   8192   672       .9216   149879874
DEFAULT   8192   720       .9013   146571255
DEFAULT   8192   768       .8850   143928671
DEFAULT   8192   816       .8726   141908868
DEFAULT   8192   864       .8607   139972381

It's all about reads

1. Don’t reduce the size of your cache if you are going to incur more physical reads

2. Don’t increase the size of your cache if you are not going to reduce the number of reads

Buffer Cache - Advisory Report

The DBA Nightmare

How Do You Detect DDL Changes

What kind of shop are you in?

1. You run a tight / secure database shop. (Does it need to be tighter?)
2. Changes are kept to a minimum. (Will one slip by?)
3. The time window between DDL changes is not sufficiently large.
4. You need an inexpensive investigative approach for DDL extraction.
5. Do you hedge on the side of caution or paranoia?
6. Do you need to be concerned with each and every DDL statement?
7. Are you proactive?
8. Do you like systematic approaches to solving problems?
9. Do you need to be notified of problems?

Do I Care About Capturing DDL

The DBA Nightmare

Have you tried to set policies?

I don’t know how many shops I have worked in and tried to set policies like:

• NO changes after 3:00pm on Friday

• Can’t use production for Test / Stress / QA

• ALL changes must be approved through DBA

• ALL changes must be implemented by DBA

• ALL source must go through QA & Test

• and the list goes on and on and on and on

Bottom line

1. Startup fallacy
2. Everyone is in a rush
3. Most don't want to do the job the right or proper way the first time

You get a call at 4am on Sunday

You spend your time on data integrity

This change MUST be in today

You don’t know what anyone is doing

No one will tell you what they are doing

You have to pick up the pieces

Attempts at DDL stability through policy

How Do You Detect DDL Changes

1. Take a look at Change DATEs in DBA views

• SELECT Statements against DBA views

2. Compare Database Objects Over Time

• Take Snapshot at T1

• Take Snapshot at T2

• Compare T1 to T2

3. Use LogMiner

• Search and Extract DDL from Current Redo Logs & Archived Logs

4. Streams

• Set of database structures to capture and replicate changes from a source database to a destination database

Methodologies

How Do You Detect DDL Changes

Object           Date Field?  What
---------------  -----------  ----------------------
DBA_OBJECTS      YES          CREATED, LAST_DDL_TIME
DBA_TABLES       YES          LAST_ANALYZED
DBA_INDEXES      YES          LAST_ANALYZED
DBA_TAB_COLUMNS  NO
DBA_IND_COLUMNS  NO

The Good - We know when an object is added / changed
The Bad  - No granularity
The Ugly - We still have to validate a change by looking at the total object

Take a Look at Dates In DBA Views
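As a sketch, a query like this one surfaces anything created or changed in the last day (the one-day window and ordering are illustrative):

SQL> SELECT owner, object_name, object_type, created, last_ddl_time
  2    FROM dba_objects
  3   WHERE last_ddl_time > SYSDATE - 1
  4   ORDER BY last_ddl_time;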

How Do You Detect DDL Changes

1. T0 - CREATE TABLE time_dba_tables with an added DATE-type column

2. T1 - INSERT INTO time_dba_tables (SELECT sysdate, t.* FROM dba_tables t)

3. T2 - INSERT INTO time_dba_tables (SELECT sysdate, t.* FROM dba_tables t)

4. TC - SELECT <changes> FROM time_dba_tables t1, time_dba_tables t2

OUTER JOINs - for what was added and deleted

DECODEs - for proper column comparison

t1.column != t2.column - WHERE clause is "OR'd" for the columns of interest

The Good - Build it yourself; no reliance on 3rd-party software or the database vendor
The Bad  - Complicated code (you better be good)
The Ugly - Oracle could change / add tables that are important to object change

Compare Database Objects Over time
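A minimal sketch of step TC, limited to detecting added tables (assumes the added DATE column is named snap_time and that :t1/:t2 hold the two snapshot times; the full version needs the OUTER JOINs, DECODEs, and OR'd column comparisons noted above):

SQL> SELECT t2.owner, t2.table_name
  2    FROM time_dba_tables t2
  3   WHERE t2.snap_time = :t2
  4     AND NOT EXISTS (
  5           SELECT 1 FROM time_dba_tables t1
  6            WHERE t1.snap_time = :t1
  7              AND t1.owner = t2.owner
  8              AND t1.table_name = t2.table_name);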

How Do You Detect DDL Changes

1. Lets you look at the redo log activity in your database

2. You can look in the current redo logs or archived logs

3. New in 9i is the ability to look at DDL statements

LogMiner

How Do You Detect DDL Changes

1. Create a dictionary
   EXECUTE DBMS_LOGMNR_D.BUILD(
     dictionary_filename => 'dictionary.log',
     dictionary_location => '/ora/oradata/hcmc/log');

2. Specify which log files are to be mined
   EXECUTE DBMS_LOGMNR.ADD_LOGFILE(
     LogFileName => '/ora/oradata/hcmc/arch/arc00075.001',
     Options     => DBMS_LOGMNR.NEW);

3. Start LogMiner
   EXECUTE DBMS_LOGMNR.START_LOGMNR(
     DictFileName => '/ora/oradata/hcmc/log/dictionary.log');

4. Query the LogMiner views
   SELECT sql_redo FROM V$LOGMNR_CONTENTS WHERE operation = 'DDL';

The Good - Easy to set up and use
The Bad  - Pinpointing the time when something might have happened
The Ugly - Manual process of adding logs and searching through them

LogMiner
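One refinement that helps with "The Bad" above: bound the search to a suspect time window, and end the session when finished (the :t1/:t2 binds are illustrative; DBMS_LOGMNR.END_LOGMNR is the standard companion call):

SQL> SELECT timestamp, username, sql_redo
  2    FROM v$logmnr_contents
  3   WHERE operation = 'DDL'
  4     AND timestamp BETWEEN :t1 AND :t2;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;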

Oracle Streams

1. Is a new feature introduced in 9iR2
2. Is a more robust replication environment that allows for the sharing of information and structures
3. Improves replication performance because Streams extracts DML and DDL from the redo log files through separate background processes
4. Supports hot mining of the current active redo log for change data at the same time it is written to the log
5. Supports the mining of archived log files
6. No loss of data
7. Directed networks

Overview

Oracle Streams: a feature within the Oracle database that allows for the replication of database structures and information between two separate databases.

Oracle Streams

A directed network is a networked system of hosts that allows information to be passed to a destination database whose host is not directly accessible from the source host.

Two methods of information propagation:

1. Queue forwarding: information is received by an intermediate database and automatically forwarded to another intermediate or destination database.

2. Apply forwarding: information is received by an intermediate database, applied, captured, re-queued, and made ready for propagation to another destination database.

Directed Networks

[Diagram: a user DDL change on HCMC (source) flows through CAPTURE (SET_UP_QUEUE, ADD_GLOBAL_RULES) and PROPAGATION (ADD_GLOBAL_PROPAGATION_RULES) to APPLY on SAIGON (destination) (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY)]

Oracle Streams

A feature within the Oracle database that allows for the replication of database structures and information between two separate databases

Streams Environment

[Diagram: User DDL Change -> CAPTURE on HCMC (source) -> PROPAGATION -> APPLY on SAIGON (destination)]

Data Definition Language (DDL)

SQL statements that affect the structure of database objects, such as CREATE TABLE, ALTER TABLE, DROP TABLE, and RENAME TABLE.

Streams Environment


Source database

The database that originates information to be shared within the Oracle Streams environment

Streams Environment


Capture
A capture background process (cp01-cp99) runs on the source database for each of the capture queues defined. As DDL is issued within the Oracle database, a set of capture rules governs what is important and extracts those statements from the redo logs. The statements are then stored (staged) in a capture queue on the source database.

Streams Environment


Propagation
A mechanism that, through a set of rules, picks up the captured statements from the source queue and transports them across a network to a destination database queue.

Streams Environment


Destination database

The database that receives information in the Oracle Streams environment.

Streams Environment


Apply
The apply process (one for each capture process propagating to this site) runs on the destination database and picks off the queue the statements transported from the source database. A set of apply rules then acts on the statements, either applying the changes to the database, transforming them into another usable form, or rejecting them.

Streams Environment


Rules
From the capture process, through the propagation process, and into the apply process, rules govern what data are of concern, where to send the data, and what to do with that data.

Streams Environment


The Good - Able to report on every DDL statement issued, without intervention
The Bad  - Learning curve is a bit high
The Ugly - Intensive & cumbersome setup

Streams Environment

[Diagram (repeated): the full setup flow - User DDL Change on HCMC (source) -> CAPTURE (SET_UP_QUEUE, ADD_GLOBAL_RULES) -> PROPAGATION (ADD_GLOBAL_PROPAGATION_RULES) -> APPLY on SAIGON (destination) (SET_UP_QUEUE, CREATE TABLE history_ddl_lcrs, CREATE PROCEDURE history_ddl, ADD_GLOBAL_RULES, ALTER_APPLY)]

Streams Environment

1. All databases using Oracle Streams must be upgraded to version 9.2.0.2 (patch #2632931)

2. Run catalog.sql & catproc.sql after you have upgraded to version 9.2.0.2

Software Requirements
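A sketch of running the catalog scripts after the upgrade (in SQL*Plus, ? expands to ORACLE_HOME):

SQL> CONNECT sys/<passwd>@hcmc AS SYSDBA
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql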

Streams Environment

1. The source database must be running in ARCHIVELOG mode in order to capture changes.

i. No data loss

ii. No database hanging caused by LGWR waiting for capture process to complete

iii. Tune your redo log generation before implementing streams

iv. Do not use the NOLOGGING option

Archive Log Requirement
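A sketch of switching the source database into ARCHIVELOG mode (standard commands; configure your archive destination parameters first):

SQL> CONNECT sys/<passwd>@hcmc AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;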

Streams Environment

COMPATIBLE: 9.2.0 or higher. For Streams to work, you must set this to at least 9.2.0.

JOB_QUEUE_PROCESSES: 2 or higher. Should be at least 2 for each database in the environment, but should equal the number of dbms_jobs that can run concurrently plus 2.

LOG_PARALLELISM: 1, if you're running one or more capture processes on the database.

LOGMNR_MAX_PERSISTENT_SESSIONS: equal to or higher than the number of capture processes. If this parameter is set too low, capture processes will not run, and it may also inhibit you from dropping them.

OPEN_LINKS: 4 or higher. Allows the distributed transactions between the source and destination databases to occur.

PARALLEL_MAX_SERVERS: current value + (3 * capture processes) + (3 * apply processes). This is just an estimate; monitor it to handle the number of parallel execution servers.

PROCESSES: current value + ((capture processes + apply processes) * 10). Again, just an estimate; most installations do not account for enough processes.

SHARED_POOL_SIZE: current size + ((capture processes + apply processes) * 10M). Should be at least 100M.

GLOBAL_NAMES: TRUE. Ensures valid database naming and no loopbacks.

Parameter Requirements
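Before changing anything, review the current values; SHOW PARAMETER is the quickest check (the parameters shown are a sample from the table above):

SQL> SHOW PARAMETER compatible
SQL> SHOW PARAMETER job_queue_processes
SQL> SHOW PARAMETER shared_pool_size
SQL> ALTER SYSTEM SET global_names = TRUE;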

Streams Environment

Intermission

Streams Setup

1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

Normal Database Stuff

2. CREATE TABLESPACE ddlman
     DATAFILE 'C:\ORACLE\ORADATA\[HCMC|SAIGON]\DDLMAN01.DBF' SIZE 100M
     REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

3. CREATE USER ddlman IDENTIFIED BY ddlman

DEFAULT TABLESPACE ddlman

QUOTA UNLIMITED ON ddlman;

4. GRANT CONNECT, RESOURCE TO ddlman;

Create Administrator

Streams Setup

1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

Advanced Queuing Administration

2. GRANT EXECUTE ON DBMS_AQADM TO DDLMAN;

Streams Administration

3. GRANT EXECUTE ON DBMS_STREAMS_ADM TO DDLMAN;

4. GRANT EXECUTE ON DBMS_APPLY_ADM TO DDLMAN;

5. GRANT EXECUTE ON DBMS_CAPTURE_ADM TO DDLMAN;

6. GRANT EXECUTE ON DBMS_PROPAGATION_ADM TO DDLMAN;

Instantiation Purposes

7. GRANT EXECUTE ON DBMS_FLASHBACK TO DDLMAN;

Easy Monitoring and Usage

8. GRANT SELECT_CATALOG_ROLE TO DDLMAN;

9. GRANT SELECT ANY DICTIONARY TO DDLMAN;

10. GRANT SELECT ON DBA_APPLY_ERROR TO DDLMAN;

Grant Privileges to Administrator

Streams Setup

1. CONNECT sys/<passwd>@[hcmc|saigon] AS SYSDBA

2. BEGIN
     DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
       privilege    => DBMS_RULE_ADM.CREATE_RULE_SET_OBJ,
       grantee      => 'DDLMAN',
       grant_option => FALSE);
   END;
   /

3. BEGIN
     DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(
       privilege    => DBMS_RULE_ADM.CREATE_RULE_OBJ,
       grantee      => 'DDLMAN',
       grant_option => FALSE);
   END;
   /

Grant Privileges to Administrator to Create Rules

Streams Setup

Why

1. By default, LogMiner tables are in the SYSTEM tablespace.

2. When you create a capture or apply process, Oracle will create a subset of the data dictionary to keep track of changes to structures.

3. Streams will actually keep multiple versions of object information.

4. The SYSTEM tablespace may not have enough room for these tables.

How

1. CONNECT sys/<passwd>@hcmc AS SYSDBA

2. CREATE TABLESPACE logminer
     DATAFILE 'C:\ORACLE\ORADATA\HCMC\LOGMINER01.DBF' SIZE 100M
     REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;

3. EXECUTE DBMS_LOGMNR_D.SET_TABLESPACE('LOGMINER');

Switch LogMiner out of the SYSTEM Tablespace

Streams Setup

What

1. If you move LogMiner after you have captured or applied, you will lose the Streams dictionary changes that have been recorded.

2. The Streams data dictionary is not kept clean by Oracle, which puts greater strain on the Streams dictionary and allows it to grow uncontrollably.

To Do

1. Simple: don't forget to move LogMiner before you capture or apply.

2. When an object is no longer being used, clean out the Streams dictionary by calling DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG for that object.

LogMiner / Streams Issues
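A sketch of the cleanup call for a no-longer-used object (the object name and type here are illustrative):

SQL> BEGIN
  2    DBMS_STREAMS_ADM.PURGE_SOURCE_CATALOG(
  3      source_database    => 'hcmc',
  4      source_object_name => 'scott.ddl_check_table',
  5      source_object_type => 'TABLE');
  6  END;
  7  /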

Streams Setup

Why
1. For transporting the captured DDL from the source database to the destination database

How

1. CONNECT ddlman/ddlman@hcmc

2. CREATE DATABASE LINK saigon CONNECT TO ddlman IDENTIFIED BY ddlman USING 'saigon';

Database Link from source to destination

Streams Setup

1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_STREAMS_ADM.SET_UP_QUEUE(
       queue_table => 'ddl_cap_table',
       queue_name  => 'ddl_cap_q',
       queue_user  => 'ddlman');
   END;
   /

3. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
       streams_type       => 'capture',
       streams_name       => 'cap_ddl',
       queue_name         => 'ddl_cap_q',
       include_dml        => false,
       include_ddl        => true,
       include_tagged_lcr => false,
       source_database    => 'hcmc');
   END;
   /

Capture

Streams Setup

1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_PROPAGATION_RULES(
       streams_name           => 'prop_ddl',
       source_queue_name      => 'ddl_cap_q',
       destination_queue_name => 'ddlman.ddl_apply_q@saigon',
       include_dml            => false,
       include_ddl            => true,
       include_tagged_lcr     => false,
       source_database        => 'hcmc');
   END;
   /

Propagation Rules

Streams Setup

1. CONNECT ddlman/ddlman@saigon
2. BEGIN

DBMS_STREAMS_ADM.SET_UP_QUEUE(

queue_table => 'ddl_apply_table',

queue_name => 'ddl_apply_q',

queue_user => 'ddlman');

END;

/

Create Queue

Streams Setup

1. CONNECT ddlman/ddlman@saigon
2. CREATE TABLE ddlman.ddl_history(

timestamp DATE,

source_database_name VARCHAR2(128),

command_type VARCHAR2(30),

object_owner VARCHAR2(32),

object_name VARCHAR2(32),

object_type VARCHAR2(18),

ddl_text CLOB,

logon_user VARCHAR2(32),

current_schema VARCHAR2(32),

base_table_owner VARCHAR2(32),

base_table_name VARCHAR2(32),

tag RAW(10),

transaction_id VARCHAR2(10),

scn NUMBER);

Create Table to hold DDL

Streams Setup

When the capture process mines information from the redo log, it reformats the information into LCRs. These LCRs are specific to the type of information captured and completely define the change that has occurred.

SYS.ANYDATA
An overloaded object type that can hold any scalar (NUMBER, VARCHAR2, CHAR, DATE, ...) or user-defined data type. It comes with methods that let us query the true data type it holds, as well as methods to retrieve the values.

$ORACLE_HOME/rdbms/admin/dbmsany.sql

Logical Change Records (LCRs)

Streams Setup

1. CONNECT ddlman/ddlman@saigon

2. CREATE PROCEDURE history_ddl(in_any IN SYS.ANYDATA) IS
     lcr      SYS.LCR$_DDL_RECORD;
     rc       PLS_INTEGER;
     ddl_text CLOB;
   BEGIN
     -- unwrap the DDL LCR from its ANYDATA wrapper
     rc := in_any.GETOBJECT(lcr);
     -- pull the DDL text into a temporary CLOB
     DBMS_LOB.CREATETEMPORARY(ddl_text, TRUE);
     lcr.GET_DDL_TEXT(ddl_text);
     -- record the change in the history table
     INSERT INTO ddlman.ddl_history
     VALUES(SYSDATE, lcr.GET_SOURCE_DATABASE_NAME(), lcr.GET_COMMAND_TYPE(),
            lcr.GET_OBJECT_OWNER(), lcr.GET_OBJECT_NAME(), lcr.GET_OBJECT_TYPE(),
            ddl_text, lcr.GET_LOGON_USER(), lcr.GET_CURRENT_SCHEMA(),
            lcr.GET_BASE_TABLE_OWNER(), lcr.GET_BASE_TABLE_NAME(), lcr.GET_TAG(),
            lcr.GET_TRANSACTION_ID(), lcr.GET_SCN());
     COMMIT;
     DBMS_LOB.FREETEMPORARY(ddl_text);
   END;
   /

Create Procedure to “handle” DDL

Streams Setup

1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_STREAMS_ADM.ADD_GLOBAL_RULES(
       streams_type       => 'apply',
       streams_name       => 'apply_ddl',
       queue_name         => 'ddl_apply_q',
       include_dml        => false,
       include_ddl        => true,
       include_tagged_lcr => false,
       source_database    => 'hcmc');
   END;
   /

Create Rules

Streams Setup

1. CONNECT ddlman/ddlman@saigon
2. BEGIN

DBMS_APPLY_ADM.ALTER_APPLY(

apply_name => 'apply_ddl',

ddl_handler => 'ddlman.history_ddl');

END;

/

Hook in the DDL handler

Streams Setup

Definition

Before we can start capturing, propagating, and applying within our Streams environment, we must instantiate the destination database. This is nothing more than registering the source SCN with the destination database so it knows the point in time from which it can start applying captured information.

There are three methods for instantiating objects:
1. Instantiating and creating database objects by using Export/Import
2. Setting the instantiation SCN for an existing table, schema, or database manually, by executing procedures in the DBMS_APPLY_ADM package at the destination database
3. Performing a metadata-only export and import for existing tables or schemas

You can verify the instantiation by querying DBA_APPLY_INSTANTIATED_OBJECTS (see the sample query below).

Instantiate the Stream Environment

Streams Setup

1. CONNECT ddlman/ddlman@hcmc

2. EXEC DBMS_CAPTURE_ADM.PREPARE_GLOBAL_INSTANTIATION;

3. DECLARE
     iscn NUMBER;
   BEGIN
     iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
     DBMS_APPLY_ADM.SET_GLOBAL_INSTANTIATION_SCN@saigon(
       source_database_name => 'hcmc',
       instantiation_scn    => iscn,
       apply_database_link  => 'saigon');
   END;
   /

Instantiate the Stream Environment
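To confirm the registration took, a sketch of the verification query mentioned earlier (run at the destination; the column list is assumed from the 9iR2 dictionary view):

SAIGON-SQL> SELECT source_database, source_object_owner,
  2                source_object_name, instantiation_scn
  3           FROM dba_apply_instantiated_objects;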

Streams Setup

1. CONNECT ddlman/ddlman@saigon

2. BEGIN
     DBMS_APPLY_ADM.START_APPLY(
       apply_name => 'apply_ddl');
   END;
   /

• Sets the status of the apply process to enabled
• Starts the reader server PX process (pnn)
• Starts the background coordinator process (apnn)
• Starts the apply server PX process (pnn)

Start the Apply Process

BEGIN
  DBMS_APPLY_ADM.STOP_APPLY(
    apply_name => 'apply_ddl');
END;
/

Stop the Apply Process
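Either way, a quick status check at the destination (DBA_APPLY is among the monitoring views listed later):

SAIGON-SQL> SELECT apply_name, status FROM dba_apply;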

Streams Setup

1. CONNECT ddlman/ddlman@hcmc

2. BEGIN
     DBMS_CAPTURE_ADM.START_CAPTURE(
       capture_name => 'cap_ddl');
   END;
   /

1. Sets the status of the capture process to enabled
2. Starts the background capture process (cpnn)

Start the Capture Process

BEGIN
  DBMS_CAPTURE_ADM.STOP_CAPTURE(
    capture_name => 'cap_ddl');
END;
/

Stop the Capture Process
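And the matching check on the source (DBA_CAPTURE shows ENABLED, DISABLED, or ABORTED):

HCMC-SQL> SELECT capture_name, status FROM dba_capture;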

OEM

Streams

OEM

Streams - Capture

OEM

Streams - Propagate

OEM

Streams - Apply

Sample DDL Session

HCMC-SQL> connect scott/tigger@hcmc
Connected.

HCMC-SQL> CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER);
Table created.

HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500));
Table altered.

HCMC-SQL> TRUNCATE TABLE DDL_CHECK_TABLE;
Table truncated.

HCMC-SQL> ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2;
Table altered.

HCMC-SQL> DROP TABLE DDL_CHECK_TABLE;
Table dropped.

Make Some Changes

Sample DDL Session

SAIGON-SQL> SELECT timestamp, substr(source_database_name,1,4) source,
  2                logon_user, command_type, object_owner owner,
  3                object_name name, object_type type
  4           FROM ddl_history;

TIMESTAMP SOURCE LOGON_USER COMMAND_TYPE      OWNER  NAME             TYPE
--------- ------ ---------- ----------------- ------ ---------------- -----
11-OCT-03 HCMC   SYS        CREATE USER              AA               USER
11-OCT-03 HCMC   SYS        CREATE TABLESPACE
11-OCT-03 HCMC   SYS        CREATE USER              TEMPUSER         USER
11-OCT-03 HCMC   SYS        ALTER USER               SCOTT            USER
11-OCT-03 HCMC   SCOTT      CREATE TABLE      SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE       SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03 HCMC   SCOTT      TRUNCATE TABLE    SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03 HCMC   SCOTT      ALTER TABLE       SCOTT  DDL_CHECK_TABLE  TABLE
11-OCT-03 HCMC   SCOTT      DROP TABLE        SCOTT  DDL_CHECK_TABLE  TABLE

View From DDL_HISTORY Table

Sample DDL Session

SAIGON-SQL> SELECT ddl_text
  2           FROM ddl_history;

DDL_TEXT
---------------------------------------------------------------------------
CREATE user aa identified by VALUES '1468620FBA6271E8'
create temporary tablespace temp01
create user tempuser identified by VALUES '2B4C9C62A2919AEF'
alter user scott identified by VALUES 'A7E7E0150C6D5EF3'
CREATE TABLE DDL_CHECK_TABLE (COL1 NUMBER)
ALTER TABLE DDL_CHECK_TABLE ADD (COL2 VARCHAR2(500))
TRUNCATE TABLE DDL_CHECK_TABLE
ALTER TABLE DDL_CHECK_TABLE DROP COLUMN COL2
DROP TABLE DDL_CHECK_TABLE

View From DDL_HISTORY Table

The DBA Nightmare

Remember Me

Sweet Dreams for The DBA

[Chart: SQL Statements Executed Over Time - wait time (y-axis) vs. time (x-axis), showing db file sequential reads, buffer busy waits, and parse calls spiking around DROP INDEX and CREATE INDEX events]

Running Streams

CREATE/ALTER/DROP TABLE (includes table comments)

CREATE/ALTER/DROP Tablespace (requires global rules to be set)

CREATE/ALTER/DROP Indexes

CREATE/ALTER/DROP Triggers

CREATE/ALTER/DROP Views

CREATE/ALTER/DROP Synonyms

CREATE/ALTER/DROP Sequences

Creation of PL/SQL packages, procedures and functions

Changes to users/roles

GRANT or REVOKE on users/roles

COMMIT

ROLLBACK

AUDIT (can be done on user objects)

DDL Types Captured

Running Streams

CREATE, ALTER, or DROP MATERIALIZED VIEW LOG

CREATE, ALTER, or DROP MATERIALIZED VIEW

CREATE or ALTER TABLE for index-organized tables

CREATE SCHEMA AUTHORIZATION

CREATE or DROP DATABASE LINK

RENAME (use ALTER TABLE instead)

CREATE TABLE ... AS SELECT for clustered tables

DDL Types Captured But NOT Applied

Running Streams

CREATE or ALTER DATABASE
ALTER SESSION
ALTER SYSTEM
TRUNCATE
CREATE/ALTER/DROP ROLLBACK SEGMENT
CREATE/ALTER/DROP TYPE
CREATE/ALTER/DROP PROFILE
CREATE/DROP LIBRARY
CREATE/DROP DIRECTORY
SET ROLE
SET TRANSACTION
SET CONSTRAINT
CREATE CONTROLFILE
CREATE SPFILE
CREATE PFILE
ANALYZE
EXPLAIN
CALL / PL/SQL procedural calls
LOCK TABLE

DDL Types NOT Captured

Running Streams

Setup / Running Streams
1. Status in dba_capture & dba_apply shows ABORTED
2. ORA-01925: maximum of 30 enabled roles exceeded
3. What object is that?
4. Mixed-case global_name causing a mismatch between Streams and LogMiner

Remedy
1. Stop the capture and apply processes and start them again
2. Increase the current value for MAX_ENABLED_ROLES
   • ALTER SYSTEM SET max_enabled_roles=100
3. Avoid using system-generated names
   • Creation will work
   • ALTER / DROP statements will more than likely fail
4. Change db_name & db_domain to uppercase

Problems You May Encounter

Running Streams

Bugs
1. ANALYZE statements are not propagated to the target database in a Streams environment when using DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES
2. DDL issued through EXECUTE IMMEDIATE is not applied at the destination
3. Checkpoints are skipped if there is any DDL activity in the redo logs, causing the capture process to reprocess logs during startup
4. Dropping the Streams environment is difficult to clean up

Remedy
1. Use DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES instead
2. Do not use dynamic DDL
3. Ensure that no DDL has been issued around scheduled shutdowns
4. Get it right the first time, or choose a versioned naming scheme

Problems You May Encounter

Running Streams

Just Some
DBA_QUEUES
DBA_QUEUE_TABLES
DBA_APPLY
DBA_APPLY_PARAMETERS
DBA_CAPTURE
DBA_CAPTURE_PARAMETERS
DBA_PROPAGATION
DBA_APPLY_ERROR
DBA_RULES
DBA_RULE_SETS
DBA_RULE_SET_RULES
DBA_JOBS
DBA_QUEUE_SCHEDULES

Monitoring
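For example, a sketch of checking for apply errors at the destination (DBA_APPLY_ERROR is in the list above; the column choice is illustrative):

SAIGON-SQL> SELECT apply_name, local_transaction_id, error_message
  2           FROM dba_apply_error;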

NoCOUG Summer Conference, Aug. 19, 2004

James F. Koopmann
Director of Technical Consulting
4750 Walnut Street, Suite 106
Boulder, Colorado 80301
303.938.8282 x115
720.219.3773 (mobile)
jameskoopmann@confio.com
www.confio.com

Get a handle on DDL using Oracle 9iR2 Streams